Faulbaer's Schlafmulde :: Feb 2010
articles
http://jm.tosses.info

model communities, forums and portals

I'm not impressed. not only did I expect far more photographers on an international portal like model mayhem, the de-facto leader of the model-photography portal pack - to me as a newcomer, almost everything about these web-services feels wrong and yesterdayish. almost all of the user-interfaces appear to have been hurried together. most of them look like they began as a web-shop, a forum or even a weblog-engine. there is no interconnection, no way out and no way in. there is no way to standardize your input or to update all of those portals at once. there seem to be no APIs anyone could use. there are no iphone apps to work with these sites. the only way to stay current is to keep a tab open in your web-browser.

since all of them claim they'd make our lives as photographers, models or stylists easier, I see some great opportunities being ignored here. portals that look like shit, feel like shit and have no means to improve won't survive for long.

I guess there will be one to rule them all in the near future. its key-features will be the following:

- connections to all the important social networks like facebook, flickr, google, twitter, soup.io and so on

- a simple user-interface, streamlined to the needs of the users. models get a model view, photographers get a photographer view etc.

- import-wizards to get your information (and hopefully your contacts) from other sites

- it will be invitation-only for a time

- it will offer spam-detection as well as counter-measures

- it will offer adwords or some similar revenue-stream, again individually for each user

- it will offer a recommendation-system for users who interact with each other, the way facebook suggests contacts and amazon suggests products based on previous interactions

- a subscription will probably make advertisement go away

- with any luck there will be localized standard-contracts for tfp or other shooting-types ready to be agreed on online, printed out and filed before the shoot

- the service will be meshed with everything. images will be hosted at flickr, contracts in google-docs, mail with google, comments with disqus, locations with qype and google-maps, stuff with amazon, ebay and google, contacts with facebook and twitter, paypal for payments etc. why reinvent the wheel over and over again? most problems have been solved and just need to be pieced together via api-calls (see the sketch after this list)

- there will be an iphone-app with notifications to never miss an opportunity

- there will be filters for people with many contacts to keep their inboxes in check

- maybe there will be forums attached to the service, but those would need to be moderated and that seems like overkill
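
to illustrate what I mean by piecing things together: below is a minimal sketch, assuming each portal exposed a simple REST endpoint for status updates - the URLs, the payload format and the token scheme are entirely made up for illustration, since none of the current portals actually offers such an API. one script, one update, all services current:

import json
import urllib.request

# hypothetical endpoints - a real portal would document its own API
SERVICES = {
    "portal-a": "https://api.portal-a.example/v1/status",
    "portal-b": "https://api.portal-b.example/v1/status",
}

def broadcast_update(text, tokens):
    """send the same update to every configured service."""
    for name, url in SERVICES.items():
        payload = json.dumps({"status": text, "token": tokens[name]}).encode()
        req = urllib.request.Request(
            url, data=payload, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req) as resp:
            print(name, resp.status)

broadcast_update("new tfp shoot planned - models welcome",
                 {"portal-a": "SECRET-A", "portal-b": "SECRET-B"})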

I'm not yet sure where to host nude-photography properly, since flickr isn't really up to the task for various reasons. maybe those images would be the only ones hosted on the portal itself.

anyway - I'm pretty sure there will be a new big player in the market within a year, and I'll be happy to switch to their service if they get even half of what I proposed above.

Faulbaer (currently registered with 5 such services all of which suck plenty)

[ 2010.02.23, 08:03 :: topic: /english/photography ]

why I cannot recommend buying the drobo anymore

I've had problems with my drobo on several occasions. it's not that it's a bad idea or a bad design in general - it's just that in most cases it doesn't do the job as advertised for me, and it doesn't help with the problem that made me buy a drobo in the first place.

let's get into the details, shall we?

let's start at the beginning, while we're at it ^_^

in the beginning there were files and they got copied and changed and copied and changed and their numbers and sizes grew and then space just ended and there was no more room for all those directories full of files ever growing in numbers and sizes - and the solution was to get more space, another hard-drive ... and as you might have expected, the new one soon became too small again, and the next, and the one after that, and then the first drive failed and I lost a gigabyte or two and I started backing up and keeping versioned redundant repositories on encrypted disks and/or disk-images and suddenly I needed twice the space and in some cases even more ... and I didn't know how to organize all those drives and repositories, and I actually considered deleting everything and starting all over again - not for long, because that was around the time drobo launched a neat marketing campaign targeted and perfectly customized to my very needs - it promised to be the solution to all my problems, and it did so with a cute girl explaining everything in a well-made video on the internet, where I live.

I'm not new to the business and I didn't believe everything I saw in the video right away. I did some research and read all the information data robotics provided me with. I reckoned they were cheating a little here and there, and I expected that the important information would be whatever they had left out of the video - which by then had gone viral, not entirely without my help and the help of so many others desperately searching for a solution to their storage problems.

I have a reasonable amount of experience with storage in almost every shape there is. I have set up and maintained small, medium-sized and large raids - even small and medium-sized sans - although by the time I decided to buy the drobo it had just been large raids and nas-storages.

all the information I got and all the pondering couldn't spoil my expectations of the drobo, which were very high to say the least. it looked as if it was going to be the real deal, and while I didn't go for the 1.0 usb version, I totally went for the firewire 800 drobo - which proved to be a major disappointment after all.

broken promises

the video had shown a movie playing continually while a drive was expanded or a volume was recovered to a replaced drive - that is actually not a lie, but they could have mentioned that a 4 tb volume takes days to recover, not hours. if you want it to finish recovery within a week, you'd better not touch the drobo while it recovers.

the video also hadn't shown that there is actually a size limitation to a drobo - you have to decide on its volume's virtual size in the setup process, and to date there is no way to grow the drobo beyond that. the maximum size is 16 tb with my firmware.

also, there is no way of uploading new firmware to a hard-drive while it sits in the drobo, which is really a shame. if you are as unlucky as I was, you end up removing and upgrading your hard-drives one by one, recovering and rebuilding the drobo volume every single time.

the build-quality of the drobo is hmkay - it's loud and plasticky. I lost the cover-flap of one drive-bay when removing said drive for the mentioned firmware-upgrade. it looks rather ugly after a few weeks of collecting dust. remember: the drobo wasn't meant to be used in a virtually dust-free data-center but at home or in an office, where there is dust - tons of it!

I also don't like the way the sockets are placed, or that it has a separate power-supply instead of a proper 115/230 volt connector.

what drives me nuts all the time is that I can't tell the drobo NOT to shut down the drives, not to send them to sleep while they're not in use. the caching on the drobo seems to be shitty too, and whenever my mac wants to access data on the drobo I hear the spin-up of the drives inside taking ages, and only after a rather long while can I use the mac again. granted, half of that issue is apple's fault, but hey - if those drives didn't sleep, the other half wouldn't be such a hassle.
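
a crude workaround I could imagine - a sketch, not a fix: it assumes the drobo is mounted at /Volumes/drobo and that a periodic read and write is enough to keep the drives from spinning down; your mount point and a sane interval may differ.

import os
import time

MOUNT = "/Volumes/drobo"   # assumption: adjust to your drobo's mount point
INTERVAL = 240             # seconds between pokes

while True:
    try:
        # touch both the read and the write path of the device so the
        # drives (hopefully) never get a chance to go to sleep
        os.listdir(MOUNT)
        with open(os.path.join(MOUNT, ".keepawake"), "w") as f:
            f.write(str(time.time()))
    except OSError as e:
        print("drobo not reachable:", e)
    time.sleep(INTERVAL)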

the overall performance of my fw-800 drobo is just poor. I haven't had an equally slow firewire drive since firewire was born, and I don't actually get why that's the case. in theory there should always be at least two places to read data from, so the drobo should read at least twice as fast as the slowest drive installed - but in reality it seems to read at about half the write-speed of the fastest drive installed, and to write at about a third of the write-speed of the slowest drive installed.
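
if you want to put rough numbers on claims like these, a quick-and-dirty benchmark is easy enough. here is a sketch that times one big sequential write and read - the path, block size and file size are arbitrary choices, and since the read happens right after the write, os caching will flatter the read number; unmount and remount in between for a more honest figure.

import os
import time

PATH = "/Volumes/drobo/benchfile"   # assumption: drobo mounted here
BLOCK = 1024 * 1024                 # 1 MiB per write
COUNT = 1024                        # -> a 1 GiB test file

chunk = os.urandom(BLOCK)
start = time.time()
with open(PATH, "wb") as f:
    for _ in range(COUNT):
        f.write(chunk)
    f.flush()
    os.fsync(f.fileno())            # force the data out of the os cache
write_secs = time.time() - start

start = time.time()
with open(PATH, "rb") as f:
    while f.read(BLOCK):
        pass
read_secs = time.time() - start

os.remove(PATH)
print("write: %.1f mb/s" % (COUNT / write_secs))
print("read:  %.1f mb/s" % (COUNT / read_secs))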

accessing data in parallel also proves to be problematic, if not dangerous. I move data around quite a lot: I copy this and that to the drobo while reading some and more from it. the drobo sometimes responds so badly to that kind of usage that I tend to reduce everything to just one job of either reading or writing - a big disappointment, since I have four drives in that drobo, and working with four terabytes of data involves a lot of parallel reading and writing.

besides the broken promises, the drobo itself fails from time to time. that's when you send a mail to their very nice and very well-trained support-team. and sure enough, they could help me most of the time, and I didn't actually lose any data - but a failing drobo isn't really an option, and taking recovery-times and bad performance into consideration, it's kind of a deal-breaker for me, too.

I'm looking into alternatives to the drobo, and deep inside me a feeling is building up to just get rid of it while there is still time - with the next major crash I'm going to switch to proper networked storage, or maybe even back to a file-server. the drobo let me down too often on too many occasions, didn't keep its promises, and can't help with the main issue: logical data-corruption.

the drobo can only fix broken drives, and that it does slowly and clumsily. for its price-tag this is not enough, and I'm not going to buy more drobos for further versioning and redundancy. the device is just not good enough for that, and I've lost most of the trust I put in it almost a year ago.

so - no, I can't recommend buying a drobo for my use-cases. it's not that it isn't ready for prime-time - it's just a really bad performer, and the only thing it does well, it stops doing as soon as you outgrow its limitations.

Faulbaer (any recommendations what to buy next?)

[ 2010.02.05, 23:56 :: topic: /english/rants ]

my wishlist for aperture 3 ... if it's ever going to be released

I have a list of what I'd really like to see in aperture 3:

- ability to load and unload several aperture databases/libraries

- ilife and iwork integration being aware of (if not tracking) several aperture libraries/databases

- better performance in aperture itself as well as the aperture vault backup process

- ability to recover single folders, projects, albums, photos and even versions from the aperture vault backup

- non-destructive aperture library consistency-check routines

- improved handling of decentrally stored offline- and online-data

- ability to store files on a networked dedicated aperture server for several aperture clients to work with - I mean, it's 2010 dudes, gigabit networks are so yesterdayish, aren't they?

- ability to keep track of and maintain redundantly stored data - I want my photos to be everywhere and I want aperture to make sure all the copies stay current - that's not rocket science, just implement it! (a toy sketch of the idea follows after this list)

- better web-publishing implementation tapping into services like flickr, ilife, photo-bucket, istockphoto and the like - there should also be easily accessible interfaces for publishing to the common blog-engines, microblogs and the like

- interfaces to work with the plugin-architecture in a non-destructive way. if I change colors, size or anything else in a plugin, I just don't want aperture to export and re-import a tiff or psd file - I want plugins to be integrated far deeper into aperture. I'd actually like to see those plugins dock right into the adjustment boxes, using the same mechanisms aperture's own adjustment panels use, placed at a sane position in the workflow - generally before the sharpening and vignetting steps ... am I the only one annoyed with the way this is currently done in aperture? plugins should be just another layer to go through in the workflow!

- ability to work with huge files - I regularly get 'unsupported image type' errors when working with photoshop files larger than 300 mb, which happens quite a lot. there is no need for aperture to act like that on a machine with over ten gigs of ram almost entirely dedicated to aperture and photoshop

- make batch-processing available to keyboard users. I don't want to dive into menus and sub-menus just to add a tag to several images or to stamp adjustments onto more than one photo

- get exif- and geo-data working properly - make adding and changing this data more accessible
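
to show how little rocket science the redundancy point above would take, here is a toy sketch: hash every file in a primary tree and report mirror copies that are missing or stale. the paths are assumptions, and a real implementation would also have to track renames, deletes and conflicts.

import hashlib
import os

PRIMARY = "/Volumes/work/photos"    # assumption: your main photo store
MIRROR = "/Volumes/backup/photos"   # assumption: the redundant copy

def digest(path, block=1024 * 1024):
    """sha1 of a file, read in blocks so huge files fit in memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(block), b""):
            h.update(chunk)
    return h.hexdigest()

for root, dirs, files in os.walk(PRIMARY):
    for name in files:
        src = os.path.join(root, name)
        dst = os.path.join(MIRROR, os.path.relpath(src, PRIMARY))
        if not os.path.exists(dst):
            print("missing:", dst)
        elif digest(src) != digest(dst):
            print("stale:  ", dst)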

Faulbaer (I'm going to add more in the future I guess ... )

[ 2010.02.05, 21:38 :: topic: /english/rants/apple ]

aperture-library spring-cleaning for the brave

first of all, let me warn you: make no mistake - this is dangerous business! if you don't know what you are doing, you can and most certainly will lose some or all of your photos! read everything top to bottom before doing anything else. read it repeatedly until you know what's going on and what I'm doing here. don't come whining to me if you lose any data - you've been warned!

now that that's off my chest let's dive right into it after a short introduction, shall we?

when apple announced aperture some years ago, it seemed to be the perfect solution to all the problems I had been experiencing with iphoto. like every other apple product it wasn't perfect, but it did the job much better than most competitors' products - at least after the upgrade to its second version. unsurprisingly it was not the solution to all the problems, but at least some of them had been addressed properly. what were the remaining problems then?

- it's still slow - especially with large libraries

- backups are taking ages and they break easily

- you can lose photos and projects when aperture crashes

- decentralized and distributed storage hasn't been addressed properly

there were several other small and large issues I'm not going to fix here.

most of the problems come down to one issue that can be cured: huge libraries beyond several hundred gb. with huge libraries, backups take longer, going through the previews takes longer, aperture consumes more memory, and crashes probably lead to larger chunks of data being lost.

the solution to most of my problems was to shrink my aperture library. I could have deleted several thousand photos, but as you can imagine I didn't want to delete my work. what I did instead was divide my one large aperture library into many smaller libraries: one library for all the years up to 2006, and one library per year after 2006. for 2009 I'm pondering splitting the year into halves or even quarters, because this library alone takes up about 550 gb. if I had recurring events I might have divided the library not only by date but also by topic.

the problem with the solution

the usual aperture way of doing what I was about to do can be described as follows:

a) open the source library

b) right-click the project and select "export project" from the pop-up menu

c) choose an export path and tell it to move the master-files into the project-file

d) wait and hope aperture doesn't crash

e) right-click the project and select "delete project" from the pop-up menu

f) click a bunch of silly questions away

g) close aperture to make sure the changes have been written to the library

h) wait and hope aperture doesn't crash

i) open the destination library (if it doesn't exist yet, open aperture while pressing the shift-key to make a new library)

j) import the just exported project

k) wait and hope aperture doesn't crash

l) close aperture to make sure the changes have been written to the library

m) go back to a) until all the projects are where you want them to be, shave, go outside and take a look at what the world is like in 2050 ... or whenever you've finished exporting projects one-by-one from aperture.

ok - you could have exported those projects one-by-one and then imported them as a folder, which would have saved you some time - but if you played it safe, the above list is pretty much what you had to do.

the hacker approach

needless to say, I wasn't going to do this the hard way but the risky, hackery way. there had to be a way to get the work done much faster, without repeating so many small steps and without answering the same silly questions over and over again ... and there was! it was just a bad idea ;)

as you might know, apple uses so-called packages for many things that need to be stored. applications, documents, sparse-disk-bundle disk-images and also the aperture libraries are stored in such packages. a package usually is just a folder with an extension in its name and a bunch of files (usually xml) and folders inside. you can take a look into a package by right-clicking it in the finder and selecting "show package contents" from the pop-up menu. if you are command-line savvy, you can just navigate into the package in your preferred shell with the usual cd command.

a few words about the aperture library.aplibrary

the aperture library package isn't too exciting either. it contains some .plist and other xml-files, the main library, and some folders with the photos organized into project-packages, which in turn contain some xml-files, the photos, their previews and versions, some meta-data, and albums as well as smart-albums. albums and smart-albums can also lie in those previously mentioned folders, with either the main library or the .plist files linking to them. I decided in favor of saving time and against saving rogue album data, since I only had about ten albums that were not within project-package boundaries but in a folder somewhere else. I couldn't find an easy way to import such rogue albums and smart-albums, but honestly, I didn't care.
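
since a package is just a folder, you can poke at the library from the command-line or with a few lines of script. the sketch below walks a library and lists the project-packages inside it - it assumes the projects are directories ending in .approject, which is what my library shows; check yours before relying on it.

import os

LIBRARY = os.path.expanduser("~/Pictures/Aperture Library.aplibrary")

for root, dirs, files in os.walk(LIBRARY):
    for d in sorted(dirs):
        if d.endswith(".approject"):
            # print the project's path relative to the library package
            print(os.path.relpath(os.path.join(root, d), LIBRARY))
    # don't descend into the project-packages themselves
    dirs[:] = [d for d in dirs if not d.endswith(".approject")]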

how big was it and how long did it take?

my original aperture library was just short of 800 gb and contained about 754,000 files. that includes the meta-data, the xml files, and the photos, previews and versions in different formats like jpeg, raw, psd and the like. copying this library took me a day, partially because I hadn't invested in fast storage. backing new images up into the aperture vault never took less than five minutes, because aperture needed to check the vault's integrity first - it also had to seek through and write into huge xml main library files almost a gigabyte in size. with this many files and such a large xml library to parse, performance had to be low.
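
if you want the same numbers for your own library before you start, counting files and adding up sizes is a one-minute script - nothing aperture-specific about it; the path is an assumption.

import os

LIBRARY = os.path.expanduser("~/Pictures/Aperture Library.aplibrary")

count, total = 0, 0
for root, dirs, files in os.walk(LIBRARY):
    for name in files:
        try:
            total += os.path.getsize(os.path.join(root, name))
            count += 1
        except OSError:
            pass  # broken links and the like

print("%d files, %.2f gb" % (count, total / 1024.0 ** 3))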

two ways to shrink it, two days to get the job done

my first approach was to just move all the projects and folders out of the package and delete their references from the database/library by erasing the folders within aperture. I soon found out that this was taking ages and that I had to repeat too many steps too often. this approach is right in only one case: when there are fewer projects to remove than folders to import. in the case of the photographs imported from iphoto - all the digital photographs I had taken between 1997 and 2006 - there were hundreds of folders containing hundreds of projects containing between one and a few hundred photos each ... sorting through them and consolidating them into fewer projects with albums referring to them seemed far too much work. so what I did was remove everything past 2006, first on the filesystem-level and then from within the library. all in all that included only six folders: 2007, 2008, 2009, 2010, misc and airsoft. I still had to right-click each of them, answer some questions and wait for aperture to clean up the database, hoping it wouldn't crash. after each folder I quit aperture and fired it up again. luckily it didn't crash during this procedure - not even once! to finalize the library, I saved it into a new aperture vault of the same name.
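
the filesystem-level part of that step could look like the sketch below: moving whole top-level folders out of the package before cleaning up their references inside aperture. this is exactly the dangerous bit - the folder names and paths are assumptions from my own layout, aperture must not be running, and you want a verified backup before touching anything.

import os
import shutil

# DANGEROUS: run only against a copy or with verified backups in place.
LIBRARY = os.path.expanduser("~/Pictures/Aperture Library.aplibrary")
DEST = os.path.expanduser("~/Pictures/evacuated-projects")
FOLDERS = ["2007", "2008", "2009", "2010", "misc", "airsoft"]

os.makedirs(DEST, exist_ok=True)
for name in FOLDERS:
    src = os.path.join(LIBRARY, name)
    if os.path.isdir(src):
        print("moving", src)
        shutil.move(src, os.path.join(DEST, name))
    else:
        print("skipping", name, "- not found at top level")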

my second approach was to open aperture while holding the option-key, which starts aperture without a library. it then asks you to select an existing library or to start with a new one. I chose the latter, added some folders like 2007, 1st quarter, 2nd quarter and so on, and afterwards imported the projects I had moved out of the aperture library.aplibrary package into their corresponding folders. after repeating this for each of the mentioned folders, I quit aperture and launched it again. aperture crashed five times, which still makes me angry, because after a crash I had to remove the just-imported folders, quit aperture, relaunch it and import the same folders again, just to be sure the library was consistent - I've had bad experiences with corrupt aperture libraries in the past. I lost about 70 photos to those crashes, some of them really nice ones. finally, I saved the new library into an aperture vault of the same name again.

it took me two days to complete the task, but now I feel much safer - and the best thing is that aperture itself feels much faster and more responsive.

why I did it - the aftermath

now that I know everything worked and still works, I can say why dividing my aperture library into smaller chunks in an ordered way was a good if not great idea.

for one, it feels faster, which is not a big surprise. aperture needs to address less memory to parse a smaller library. it needs less time to load and to write into the library. there is less to be backed up when anything changes, and there is less to be restored if anything goes wrong.

the libraries are also smaller and therefore much more flexible to work with. I can easily archive those smaller libraries to several old hard-drives and store them away off-site. before, any of those drives had to be at least 1 tb in size (the library was pushing 800 gb), and it took me ages to get one backup stored away - time in which I couldn't work with aperture, which proved annoying several times.

today it's also much easier for me to just fire up the aperture vault backup-procedure, because I know it won't take too long in most cases. this may prove to be a life-saver in the future, because I won't 'forget' about backing up crucially needed data.

it's not all good

there are some downsides I need to mention before you dive into it - it's not all good. as with all the other database-oriented programs from apple, the ilife system doesn't expect you to divide your databases into many smaller ones - I mean ... well ... it's obvious apple doesn't expect you to outgrow their products faster than they make faster hardware available, if not affordable, to you - but in some cases that's just what happens if you use your computer professionally. this happened to my itunes library in the past, to my iphoto libraries, and now to my aperture library. in the case of itunes, faster hardware eventually let me reconsolidate my itunes library back into one. in the case of iphoto I switched to aperture, which solved the performance issue at least for some years. now in aperture I have reached the border of what I find bearable, and again I divided a library for good - let's see whether apple comes up with either a better library/database format for aperture 3 or a better mac pro I can buy that can handle the load.

what are the downsides of a divided aperture library?

not that many, to be honest, but they are pretty annoying nevertheless.

- itunes gets all confused - it will only put photos from your current library onto your ipods, iphones and apple-tv. this is a biggie for me, so I'm considering exporting the things I always want on my iphone into the iphoto library - since I'm not working in iphoto anyway, it's more or less static. I'd then fill my media-devices from this iphoto library.

- needless to say, the above is true for all the applications sharing media via the ilife hub. that includes iwork, ilife, ecto and many more. it's really a pity that apple offers to divide libraries but doesn't care to keep track of them. there should be a way to link to those decentralized media-stores.

- it's annoying to seek through several databases by quitting and relaunching aperture. but that again is apple's fault - there is not one good reason why aperture can't handle several open libraries in parallel, the way mail.app handles several mail-accounts and almost any app handles several open files. I sincerely hope apple is going to address this issue in future aperture versions.

- keeping track of several libraries and versioning them could prove to be a hassle in the future. I can imagine deltas between several versions of the same library backed up to different drives in different locations, but there are ways to deal with this, and it's nothing really new either - I just need to keep it in mind, don't I? ;)

a final word

back everything up, and quit and relaunch aperture as often as you can. aperture is a beast with memory leaks; it's obese and heavy and stupid. I don't know a better product for the job it does, and that's the only reason I use it. keep in mind that aperture is only there to lose your photos and make your life a misery. so again: make backups before you do what I proposed - even better, don't do it at all! you are probably better off doing it the painfully slow way or not at all - maybe apple is going to supply us with much faster hardware really soon; you'd better wait until your new machine has arrived and keep your aperture library.aplibrary intact as it is!

also, don't forget that you will probably lose some albums and smart-albums that are not stored in any particular project but in a folder or anywhere else.

don't do it - and if you do it, make backups (that's plural, because a single backup is going to be too few - make more than one, make several backups!).

Faulbaer (you've been warned - not once, not twice but three times!)

[ 2010.02.05, 21:09 :: topic: /english/hacking ]