Faulbaer's Schlafmulde :: english :: hacking
6 articles
http://jm.tosses.info

tp-link tl-wr703n usb power on/off via gpio8

to disable/enable power to the tp-link's usb port just set /sys/class/gpio/gpio8/value to 0 (zero) for power off and to 1 (one) for power on. example on the shell: echo 0 > /sys/class/gpio/gpio8/value
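
in case gpio8 isn't exported on your build yet, the whole dance looks something like this (a sketch - on my firmware gpio8 was already there):

echo 8 > /sys/class/gpio/export            # only needed if gpio8 doesn't show up yet
echo out > /sys/class/gpio/gpio8/direction
echo 0 > /sys/class/gpio/gpio8/value       # usb power off
echo 1 > /sys/class/gpio/gpio8/value       # usb power back on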

that works very well when used to "command" a usb-controlled master/slave multi-way power strip

in my case I use this to switch an equinux tizi on and off; the tizi runs off the tp-link's usb power. that way I can keep the tizi upstairs where there is a dtt signal and switch it on and off from my office in the cellar.
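
from the office that boils down to a one-liner over ssh (the hostname is just a placeholder for whatever your tp-link answers to):

ssh root@tplink-upstairs "echo 1 > /sys/class/gpio/gpio8/value"    # tizi on
ssh root@tplink-upstairs "echo 0 > /sys/class/gpio/gpio8/value"    # tizi off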

I'm very annoyed by the fact that the tp-link now ships with a firmware that cannot easily be flashed to openwrt. in the future I will stick to the more versatile alfa networks ap121u (http://www.amazon.de/dp/B00CZ81428). it can also run on usb power alone, assuming you use the right cable: usb to 5.5 mm/2.1 mm barrel plug

Faulbaer (should check for energy consumption on these ...)

[ 2015.02.09, 00:24 :: thema: /english/hacking :: link zum artikel :: 0 Comments ]

hacking the apc ap9606 on an ap9212

in case you get your hands on a mostly working apc ap9212 unit with an ap9606 communications module for which you know neither the ip address nor a username/password, you may want to try hacking the device via telnet, because connecting to the serial port requires a special apc cable _and_ a working port. by the way, I'm assuming an ip address has already been configured (steady green status led), which prevents the ap9606 from responding to ip addresses set in the arp table with "arp -s".

1. find out the mac address by either attaching the ap9606 to a managed switch and reading the mac address listed for its port, or attaching it directly to another computer and collecting addresses with tcpdump.

2. find the ip address by attaching the ap9606 to a computer and listening with tcpdump. compare the output to the mac address you collected earlier and pick the address that makes sense. in my case it was an arp request, and the wanted address was the one following "tell".

3. configure an ethernet interface for the same subnet and connect to the ip address you just found via telnet.
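
for steps 1 to 3 the discovery part looks roughly like this on a linux box (a sketch - interface name and addresses are just examples):

tcpdump -n -e -i eth0 arp                  # -e prints the mac addresses as well
ip addr add 192.168.1.10/24 dev eth0       # put yourself into the same subnet
telnet 192.168.1.42                        # the address the ap9606 kept "telling" about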

4. now you can log in as any user with the factory password TENmanUFactOryPOWER and dump some valuable information from specific addresses of the device's flash memory using code "13". in my case, for example, address "1d0" dumped the user/password along with a bunch of other junk. the username started with "0u" followed by the username itself, i.e. "0uadmin", and the password followed in clear text after a gap of 6x "FF".

5. exit with ctrl+a and re-login using the user/password you just gathered.

for configuring the ap9606 I personally prefer the web-gui - not because it is good but because the cli is just horrible.

now that you know how to hack the device you will probably want to find a way to secure it against others. me, too.

Faulbaer (it's fun to snmp-control the ports on this old pdu - it works well!)

[ 2014.12.16, 01:57 :: thema: /english/hacking :: link zum artikel :: 0 Comments ]

trusty old apple tv

just recently I converted my old first-generation apple tv into a low-energy always-on server for monitoring, backups and such. backuppc apparently made it suffer too much, but icinga works quite well. I also tried ocs inventory, and despite its need for mysql it is also running well. I added an apt-cacher daemon for good measure, but the first run pushed the little device over the edge, resulting in an average load of 23+. it's behaving well for the moment, though. with backuppc migrated off to a tougher machine, this little gem runs like a charm. let's hope it stays that way.

Faulbaer (yeah ... should be linking to stuff but you'll know how to google, right?)

[ 2014.05.23, 19:53 :: thema: /english/hacking :: link zum artikel :: 0 Comments ]

djbdns + svn + cron = dns replication

I'm sure this has been done before - probably even pretty often - but since I couldn't find anything on how to make it work properly without spending too much time on it, I decided to write up what I did there.

I wanted to replicate my dns database between several djbdns installations on different operating systems. when I first set this up, I went for ssh with a passphrase-less key. there were some minor issues with that:

- no passphrase (must be something spiritual but I don't like that)
- without rsync involved: one-way only (as in master->slave, slave...)
- with rsync involved still not optimal to keep env, root etc. apart
- no versioning, no roll-back, no safety

on the plus side, it's been super easy to set up, though.
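
roughly, that old setup was nothing more than a one-way push from the master via cron (a sketch - host name and interval are just examples):

*/5 * * * * rsync -az -e ssh /service/tinydns/root/data.cdb ns2:/service/tinydns/root/data.cdb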

since I had recently bought into a new server plan, and just a month earlier had found an old wrap pc that had been configured as an openbsd dns server years ago and was still working, I decided to put some more brain into the matter. these were the results I was looking for:

- it shouldn't matter where to change configuration
- I shouldn't need to log into the dns servers ever again
- versioning/backup/roll-back should be an option
- only parts of the directory should be replicated/distributed
- take into consideration that not every server is the same
- server should pull the updates
- encrypted communication
- authentication required for at least writing

the main trigger was going to be cron, like before, but I had to find something for all my other needs. subversion was the solution.

I finally went for a subversion repository with a separate env directory for each server, since the ip address as well as the path to the root directory are configured there and can differ from server to server.

the svn sits behind an apache2 just because that was already there and I couldn't be bothered to make svnserve fly with ssl and authentication. apache works just fine. so we have svn behind apache/ssl, and that part took my rusty brain two attempts and maybe an hour to get working properly. I'm getting too old for this stuff.

configuring the servers didn't take much longer, and writing a script for cron with all the necessary environment variables took another two hours. I bet any reasonably practiced administrator finishes this in less than an hour.

now let's talk source (and yes, I'm perfectly aware of there being room for improvement - so just let me know if you find something, will you?)

since djbdns/tinydns has become part of even the debian lenny distribution, I think you will find the basic setup instructions for your favorite flavor of linux/unix real quick on any major search engine. so let's dive into the specifics of my setup, shall we?

first of all, I'm kind of a control freak when it comes to my servers. I didn't really like having all domains in one data file, so I decided to add a directory for per-zone .data files that I'd later just cat together into data. I also prepared some files I was going to work with later.

cd /service/tinydns/root/
mkdir data.d
cd data.d
touch tosses.info.data
touch lsrv.net.data
touch make.sh
touch svnup.sh

I went on configuring my .data files just like I would a regular tinydns data file. to assemble them into the data file in the root directory, I hacked up a little make.sh script like this:

vim make.sh

#!/bin/sh
ROOT=/service/tinydns/root
DATAD=$ROOT/data.d
echo "# This file will be generated by $DATAD/make.sh" > $ROOT/data
cat $DATAD/*.data >> $ROOT/data

chmod +x make.sh
svn add make.sh
svn ci -m "build your data from data.d with make.sh"
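
the generated data file doesn't feed tinydns until data.cdb has been rebuilt from it. assuming the Makefile that tinydns-conf drops into the root directory is still around, the rebuild is just this (otherwise run tinydns-data there directly):

cd /service/tinydns/root
./data.d/make.sh                           # concatenate the .data files into data
make                                       # runs tinydns-data and rebuilds data.cdb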

my dns servers were halfway done. next up was the subversion configuration on the svn server as well as on the dns servers.

on the subversion server I made a repository just for the dns servers. in that repository I made some directories, and finally I added a basic-auth/ssl configuration to the apache setup:

mkdir -p /var/svn/repositories
chown www:www /var/svn/repositories
su - www
svnadmin create /var/svn/repositories/dns
svn mkdir --parents -m "repository layout" file:///var/svn/repositories/dns/root
svn mkdir --parents -m "repository layout" file:///var/svn/repositories/dns/ns1/env
svn mkdir --parents -m "repository layout" file:///var/svn/repositories/dns/ns2/env
svn mkdir --parents -m "repository layout" file:///var/svn/repositories/dns/ns3/env

now I had a shared root directory for the data.cdb and the .data files as well as an env directory for each of my dns servers in the repository.

the apache2 configuration was pretty simple as well. after enabling ssl and configuring the ports as well as the virtual hosts, I added the site configuration for my dns repository as root, as follows:

vim /etc/apache2/sites-available/svn.tosses.info

<VirtualHost 192.168.12.34:443>
  ServerName svn.tosses.info
  ServerAdmin svn@tosses.info
  SSLEngine on
  SSLCertificateFile /etc/apache2/ssl.crt
  SSLCertificateKeyFile /etc/apache2/ssl.key
  Alias /svn/ /var/svn/repositories/
  <Location /svn>
    DAV svn
    SVNParentPath /var/svn/repositories
  </Location>
  <Location /svn/dns>
    AuthType Basic
    AuthName DNS
    AuthUserFile /etc/apache2/htpasswd.svn
    require valid-user
    order deny,allow
    deny from all
    satisfy any
  </Location>
  ErrorLog /var/log/svn.tosses.info-error.log
  CustomLog /var/log/svn.tosses.info-access.log common
</VirtualHost>
htpasswd -c /etc/apache2/htpasswd.svn dns
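
on a debian-style apache the modules and the site still need to be enabled before the graceful restart (a sketch - module and package names may differ on your distribution):

a2enmod dav
a2enmod dav_svn
a2enmod ssl
a2ensite svn.tosses.info
apache2ctl graceful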

after a graceful restart I could access the repository and see the directories I had made before. my work on the svn-server was done.

back on the (till then) main dns server I checked out the empty directories and started adding files to the svn:

cd /service/tinydns/env
svn co https://svn.tosses.info:443/svn/dns/ns1/env .
svn add IP ROOT
svn ci -m "IP and ROOT configuration for ns1 added"
cd /service/tinydns/root
svn co https://svn.tosses.info:443/svn/dns/root .
svn add data.d data.cdb
svn ci -m "general data.d directory and data.cdb database for all dns"

so far so good. for cron I still needed an svn update script with the necessary environment variables. the script needed to be executable, and I had to add svnup.sh to the crontab:

cd /service/tinydns/root/data.d
vim svnup.sh

#!/bin/sh
# environment for running svn from cron
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
LOGNAME=root
USER=root
HOME=/root
export PATH LOGNAME USER HOME
cd /service/tinydns/root
svn update

chmod +x svnup.sh
svn add svnup.sh
svn ci -m "now svnup.sh will update the svn"

crontab -e

* * * * * /service/tinydns/root/data.d/svnup.sh > /dev/null
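
with the master set up like this, pushing out a zone change is just editing, rebuilding and committing - the other servers pick it up on their next cron run (a sketch; the make step again assumes the stock tinydns Makefile):

cd /service/tinydns/root
vim data.d/tosses.info.data                # edit a zone
./data.d/make.sh                           # rebuild data from data.d
make                                       # rebuild data.cdb
svn ci -m "updated tosses.info"            # the slaves pull this within a minute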

for the other two servers all I had to do was repeat the checkout of their own env directories; in the root directory I had to remove data, data.d and data.cdb before the checkout could succeed. but those had to go anyway:

on ns2

cd /service/tinydns/env
svn co https://svn.tosses.info:443/svn/dns/ns2/env .
svn add IP ROOT
svn ci -m "IP and ROOT configuration for ns2 added"
cd /service/tinydns/root
rm -Rf data.d data data.cdb
svn co https://svn.tosses.info:443/svn/dns/root .

crontab -e

* * * * * /service/tinydns/root/data.d/svnup.sh > /dev/null

on ns3

cd /service/tinydns/env
svn co https://svn.tosses.info:443/svn/dns/ns3/env .
svn add IP ROOT
svn ci -m "IP and ROOT configuration for ns3 added"
cd /service/tinydns/root
rm -Rf data.d data data.cdb
svn co https://svn.tosses.info:443/svn/dns/root .

crontab -e

* * * * * /service/tinydns/root/data.d/svnup.sh > /dev/null

now every server tries to update from the repository every minute. sure, that's a lot, and maybe I'll go back to a five-minute interval.

the only thing that might become an issue is if I change files in data.d on several servers without checking them in immediately. but there are enough ways to fix that in svn.
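
if two servers ever do end up with conflicting local edits, the usual svn routine sorts it out (a sketch):

svn update                                 # flags the conflict
svn diff data.d/tosses.info.data           # see what collided
vim data.d/tosses.info.data                # merge by hand
svn resolved data.d/tosses.info.data
svn ci -m "merged conflicting zone edits"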

Faulbaer (I hope I haven't forgotten anything)

[ 2010.08.23, 08:16 :: thema: /english/hacking :: link zum artikel :: 0 Comments ]

aperture-library spring-cleaning for the brave

first of all, let me warn you: make no mistake - this is dangerous business! if you don't know what you are doing, you can and most certainly will lose some or all of your photos! read everything top to bottom before doing anything else. read everything repeatedly until you know what's going on and what I'm doing here. don't come whining to me if you lost any data - you've been warned!

now that that's off my chest let's dive right into it after a short introduction, shall we?

when apple announced aperture some years ago, it seemed to be the perfect solution to all the problems I had been experiencing with iphoto. like every other apple product it wasn't perfect, but it did the job much better than most competitors' products - at least after the upgrade to its second version. unsurprisingly it was not the solution to all the problems, but at least some of them had been addressed properly. what were the remaining problems then?

- it's still slow - especially with large libraries

- backups are taking ages and they break easily

- you can lose photos and projects from aperture crashing

- decentralized and distributed storage hasn't been addressed properly

there were several other small and large issues I'm not going to fix here.

most of the problems come down to one issue that can be cured: huge libraries beyond several hundred gigabytes. with huge libraries, backups take longer, going through the previews takes longer, aperture consumes more memory, and a crash is likely to take a larger chunk of data with it.

the solution to most of my problems was to shrink my aperture library. I could have deleted several thousand photos, but as you can imagine I didn't want to delete my work. what I did instead was divide my one large aperture library into many smaller libraries. for all the years up to 2006 there is one new library; for the years after 2006 there is a library for each year. for 2009 I'm pondering splitting the year into halves or even quarters, because that library alone takes up about 550 gb. if I had recurring events I might have divided the library not only by date but also by topic.

the problem with the solution

the usual aperture way of doing what I was about to do can be described as follows:

a) open the source library

b) right-click the project and select "export project" from the pop-up menu

c) choose an export path and tell it to move the master files into the project file

d) wait and hope aperture doesn't crash

e) right-click the project and select "delete project" from the pop-up menu

f) click a bunch of silly questions away

g) close aperture to make sure the changes have been written to the library

h) wait and hope aperture doesn't crash

i) open the destination library (if it doesn't exist yet, open aperture while pressing the shift-key to make a new library)

j) import the just exported project

k) wait and hope aperture doesn't crash

l) close aperture to make sure the changes have been written to the library

m) go back to a) until all the projects are where you want them to be, shave, go outside and take a look at what the world is like in 2050 ... or whenever you've finished exporting projects one-by-one from aperture.

ok - you could have exported the projects one by one and then imported them together as a folder, which would have saved you some time - but if you played it safe, the above list is pretty much what you had to do.

the hacker approach

needless to say, I wasn't going to do this the hard way but the risky, hackish way. there had to be a way to get the work done much faster, without repeating so many small steps and without answering the same silly questions over and over again ... and there was! it was just a bad idea ;)

as you might know, apple uses so-called packages for many things that need to be stored. applications, documents, sparse-bundle disk images and also the aperture libraries are stored in such packages. a package usually is just a folder with an extension in its name and a bunch of files (usually xml) and folders inside. you can take a look into a package by right-clicking it in the finder and selecting "show package contents" from the pop-up menu. if you are command-line savvy, you can just navigate into the package in your preferred shell with the usual cd command.
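
on the shell that's nothing more than this (the path is just an example - use wherever your library lives):

cd ~/Pictures/"Aperture Library.aplibrary"
ls                                         # the package is just a folder full of xml files and project packages
du -sh .                                   # and this is how big it really is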

a few words about the aperture library.aplibrary

the aperture library package isn't too exciting either. it contains some .plist and other xml files, the main library, and some folders with the photos organized in project packages, which again contain some xml files, the photos, their previews and versions, some metadata, and albums as well as smart albums. albums and smart albums can also lie directly in those previously mentioned folders, with either the main library or the .plist files linking to them. I decided to save time and not bother saving such rogue album data, since I only had about ten albums that were not within project package boundaries but in a folder somewhere else. I couldn't find an easy way to import such rogue albums and smart albums, but honestly I didn't care.

how big was it and how long did it take?

my original aperture library was just short of 800 gb and contained about 754000 files. this includes the metadata, xml files, the photos, previews and versions in different formats like jpeg, raw, psd and the like. copying this library took me a day, partially because I hadn't invested in fast storage. backing new images up into the aperture vault never took less than five minutes, because aperture needed to check the vault's integrity first - and it had to seek through and write into a huge xml main library file almost a gigabyte in size. with this many files and such a large xml library to parse, performance had to be low.

two ways to shrink it, two days to get the job done

my first approach was to just move all the projects and folders out of the package and delete their references from the database/library by erasing the folders within aperture. I soon found out that this was taking ages and that I had to repeat too many steps too often. this was the right approach in only one case - the case where there were fewer projects to be removed than folders to be imported. in the case of the photographs imported from iphoto, containing all the collected digital photographs I had taken between 1997 and 2006, there were hundreds of folders containing hundreds of projects containing between one and a few hundred photos ... to sort through them and consolidate them into fewer projects with albums referring to them seemed to be far too much work. so what I did was to remove everything past 2006, first on the filesystem level and then from within the library. all in all that included only six folders: 2007, 2008, 2009, 2010, misc and airsoft. I still had to right-click each one of them, answer some questions and wait for aperture to clean up the database, hoping it wouldn't crash. after each folder I quit aperture and fired it up again. luckily it didn't crash during this procedure - not even once! to finalize the library, I saved it into a new aperture vault of the same name.

my second approach was to open aperture while holding the option key, thereby starting aperture without a library. it then asked me to select an existing library or start with a new one. I chose the latter, added some folders like 2007, 1st quarter, 2nd quarter and so on, and afterwards imported the projects I had moved out of the aperture library.aplibrary package into their corresponding folders. after repeating this for each of the mentioned folders I quit aperture and launched it again. aperture crashed five times, which still makes me angry, because after a crash I had to remove the just-imported folders, quit aperture, relaunch it and import the same folders again, just to be sure the library was consistent - I've had bad experiences with corrupt aperture libraries in the past. I lost about 70 photos because of those crashes, some of them really nice ones. finally I saved the new library into an aperture vault of the same name again.

it took me two days to complete the task but now I feel much safer and the best thing is that aperture itself feels much faster and much more responsive.

why I did it - the aftermath

now that I know everything worked and still works I can say why dividing up my aperture library into smaller chunks in an ordered way was a good if not great idea.

for one, it feels faster which is not a big surprise. aperture needs to address less memory to parse through a smaller library. it needs less time to load and write into the library. there is less to be backed up if anything changes and there is less to be restored if anything goes wrong.

also, the libraries are smaller and therefore much more flexible to work with. I can easily archive those smaller libraries to several old hard drives and store them away off-site. before, any one of those drives had to be at least 1 tb in size, and it took me ages to get one backup stored away - time during which I couldn't work with aperture, which proved to be annoying several times.

today it's also much easier for me to just fire up the aperture vault backup-procedure because I know it's not going to take too long in most cases. this can prove to be a life-saver in the future because I won't 'forget' about backing up crucially needed data.

it's not all good

there are some downsides I need to mention before you dive into it - it's not all good. as with all the other database-oriented programs from apple, the ilife system doesn't expect you to divide your databases into many smaller ones - I mean ... well ... it's obvious apple doesn't expect you to outgrow their products faster than they make faster hardware available, if not affordable, to you - but in some cases that's just what happens if you use your computer professionally. this happened to my itunes library in the past, to my iphoto libraries and now to my aperture library. in the case of itunes, apple just came out with faster hardware so I could reconsolidate my itunes library back into one. in the case of iphoto I switched to aperture, which solved the performance issue at least for some years. now in aperture I reached the limit of what I felt was bearable and again divided a library for good - let's see if apple comes up with either a better library/database format for aperture 3 or a better mac pro I can buy that can handle the load.

what are the downsides of a divided aperture library?

not so many to be honest but they are pretty annoying nevertheless.

- itunes gets all confused - it will only put photos from your current library onto your ipods, iphones and apple tv. this is a biggie for me, so I'm considering exporting the things I always want on my iphone into the iphoto library - since I'm not working in iphoto anyway, it's more or less static. I'd then fill my media devices from this iphoto library.

- needless to say, the above is true for all the applications sharing media via the ilife hub. that includes iwork, ilife, ecto and many more. it's really a pity that apple offers to divide libraries but doesn't care to keep track of them. there should be a way to link to those decentralized media stores.

- it's annoying to seek through several databases by quitting and relaunching aperture. but that again is apple's fault - there is no really good reason why aperture can't handle several open libraries in parallel, like mail.app handles several mail accounts and almost any app handles several open files. I sincerely hope apple is going to address this issue in future aperture versions.

- keeping track of several libraries and versioning several libraries could prove to be a hassle in the future. I can imagine deltas between several versions of the same library backed up to different drives in different locations, but there are ways to deal with this, and it's nothing really new either - I just need to keep it in mind, don't I? ;)

a final word

back everything up, quit and relaunch aperture as often as you can. aperture is a beast with memory leaks, and it's obese and heavy and stupid. I don't know a better product for the job it does, and that's the only reason why I use it. keep in mind that aperture is only there to lose your photos and make your life a misery. so again: make backups before you do what I proposed - even better, don't do it at all! you are probably better off doing it the painfully slow way or not at all - maybe apple is going to supply us with much faster hardware really soon - better to wait until your new machine has arrived and keep your aperture library.aplibrary intact as it is!

also, don't forget that you will probably lose any albums and smart albums not stored in a particular project but in a folder or anywhere else.

don't do it, and if you do it, make backups (that's the plural of backup, because one is going to be too few - make more than one, make several backups!).

Faulbaer (you've been warned - not once, not twice but three times!)

[ 2010.02.05, 21:09 :: thema: /english/hacking :: link zum artikel :: 0 Comments ]

riding the wave

sunday night I finally got invited to the google wave beta - or soft launch, if it's anything like orkut, gmail and all the other google projects that have been started up in a similar way.

so ... how is it? is google wave the upcoming replacement for email and web communication? will we start collecting our common knowledge in wiki-like highly dynamical google waves in the near future?

I can't tell - google wave has a lot of potential, but in its current state it's just too messy - there is no order to it and information gets lost easily. everything revolves around searches - if you want to go out and play, the first thing you have to learn is how to filter the big public wave - or you are going to drown.

I have to admit that I like the current google wave implementation for what it is - a kind of documentation/chat/communication/wiki technology that can be populated easily. it's kind of like a brainstorming device. to make it valuable for more than that it will have to go a long way - it will have to evolve like all the other internet services did before the wave was born.

right now from my point of view it's lacking some important features:

- most of my friends aren't yet on the wave

- there is no really easy (as in bookmarklet) way of adding things to the wave

- there is no way of partially extracting waves into anything else but new waves - I'd rather see some embedded threading like linking where you add stuff from one wave to another but where it's only displayed in the new wave but edited/stored in the originating wave

- I need to be able to get rid of contacts - I guess that's just a bug right now

- rights management isn't yet what I'm looking for

- I'd really like to install my own wave on my own server, connecting to the wwwave on my terms

- I'd like to be able to participate privately without my (nick)name showing up in every wave I visit

- there should be a way to easily clean a wave up - or collapse less important communication

- it should be distinguishable whether parts of a wave are communication, comments or information and not just a big whirly mixture of all of it - maybe I'll need to read the manual

there are lots more things that will have to be worked out but I have to admit that google wave is very charming and I really like it.

right now I have to tell you that I can't invite anyone to google wave - I never could - so be patient - sooner or later either I'm going to be able to invite people or google wave is going to go public before that.

Faulbaer (riding the wave)

[ 2009.10.13, 20:19 :: thema: /english/hacking :: link zum artikel :: 0 Comments ]