While I am adding more and more nodes to my custom Raspberry Pi based home automation system (more on that in future posts), most nodes have had a pretty stable wireless connection. The Pi in my garage that controls the motorized garage doors, for example, has an uptime of about 500 days now without any issues.
However, recently one Pi's connection became a bit flaky. The setup: a Raspberry Pi 2 Model B with an Edimax Wifi USB dongle. I use these dongles for most of my Raspberry Pi devices and have had no issues.
When researching (and by that, I mean googling ;) ) I found a surprising number of topics where people report having issues with the Pi not reconnecting when a Wifi connection is dropped.
One solution I found involved rebooting the Pi when it notices that it has lost its Internet connection. I find that a bit drastic, but maybe something that can be done if all else fails. However, I also do not want all my Raspberry Pi devices in the house stuck in a reboot loop just because my ISP is having issues.
So instead, I went for a solution that detects whether internal servers, such as the router, are reachable and makes the reset decision based on that. Furthermore, instead of rebooting the device, all I want is for the Pi to reset the Wifi connection and try to connect again.
Based on a blog post by Thijs Bernolet, I ended up with this script:
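A sketch of such a script, matching the line-by-line explanation below (the wlan0 interface name and the 5 second pause are my assumptions; 192.168.1.1 is the router address used throughout this post). It is written to a temporary path here; on the Pi it would be saved as /usr/local/bin/checkwifi.sh:

```shell
# Reconstructed checkwifi.sh; the line numbers referenced in the text
# correspond to the lines between the EOF markers.
cat > /tmp/checkwifi.sh <<'EOF'
#!/bin/bash

ping -c4 192.168.1.1 > /dev/null

if [ $? != 0 ]
then
    # Router unreachable: log the event, then restart the wifi interface.
    echo "Wifi down at: $(date)" >> /home/pi/restart_wifi_status.log
    ifdown wlan0
    sleep 5
    ifup --force wlan0
fi
EOF
bash -n /tmp/checkwifi.sh && echo "syntax OK"
```

Save it as /usr/local/bin/checkwifi.sh and make it executable with chmod +x; since cron runs it via sudo, no sudo is needed inside the script itself.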
Let me explain what this does...
Line 3 tries to ping the address of my router (192.168.1.1, but you might want to choose a different target in your network) four times. If this succeeds, the return value of the ping command will be '0':
pi:pi /usr/bin $ ping -c4 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_req=1 ttl=64 time=68.5 ms
64 bytes from 192.168.1.1: icmp_req=2 ttl=64 time=11.0 ms
64 bytes from 192.168.1.1: icmp_req=3 ttl=64 time=7.55 ms
64 bytes from 192.168.1.1: icmp_req=4 ttl=64 time=10.9 ms

--- 192.168.1.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 7.551/24.497/68.515/25.452 ms
pi:pi /usr/bin $ echo $?
0
If it fails to ping the router, the return value will be '1':
$ ping -c4 192.168.1.222
PING 192.168.1.222 (192.168.1.222) 56(84) bytes of data.
From 192.168.1.205 icmp_seq=1 Destination Host Unreachable
From 192.168.1.205 icmp_seq=2 Destination Host Unreachable
From 192.168.1.205 icmp_seq=3 Destination Host Unreachable
From 192.168.1.205 icmp_seq=4 Destination Host Unreachable

--- 192.168.1.222 ping statistics ---
4 packets transmitted, 0 received, +4 errors, 100% packet loss, time 3004ms
pipe 3
pi:pi-cam-1 /usr/bin $ echo $?
1
In line 5 we check that return value. If the value is 0, nothing happens. If it is not, meaning the router was not reachable after four tries, the script will do three things:
- (Line 8) Log the date and time of the connection loss in a file. This is so that I can log in from time to time to check how many times this event actually occurred.
- (Line 9+10) Shut down the Wifi connection and wait a bit.
- (Line 11) Bring the Wifi connection back up.
What is left to do is run this script on a regular interval. You could do it by simply adding a loop around the code with a sufficient time-out. But if something goes wrong and the script exits, that's it.
A more straightforward way to ensure the script is called regularly is to add it to the crontab:
$ crontab -e
Then add this line:
*/5 * * * * /usr/bin/sudo -H /usr/local/bin/checkwifi.sh >> /dev/null 2>&1
This will run the script as root using sudo every 5 minutes.
That's it! Make sure the log file (in my case located at /home/pi/restart_wifi_status.log) is writable by root and you're done. Hopefully your Raspberry Pi's connection is more stable now.
Note: The above applies to any Linux-based machine and is not Raspberry Pi specific. So if you have some kind of server that is connected wirelessly and has trouble staying connected, try the above.
[Update on July 28th 2016] I have updated the post to reflect that CrashPlan 4.7.0 still works the same way, and added instructions for running the UI on MacOS/OSX.
Motivation

I currently have a Synology DS212 (discontinued, but similar to the more current DS213), which works fine, but I wanted to have another backup of the data on it off-site, meaning not in my house.
A few years ago I got myself a CrashPlan subscription and have since used it to back up my own and my family's data from multiple machines. Ideally I would like to run the CrashPlan software on my NAS, but the DS212 is not powerful enough to handle the Java-based CrashPlan client.
Since I anyway already have a Raspberry Pi 2 Model B set up as a home server (I am running a custom home automation server and some other servers on it), I thought it would be great to add the CrashPlan client to it.
Getting the CrashPlan headless client running on a Raspberry Pi is luckily possible, but a few issues prevent it from working right out of the box. Thanks to other online tutorials, though, I've been able to get it working, and I want to share here what I did. See the end of this article for a full list of sources I consulted, without which this would not have been possible.
Aside from this, I also want to explain my setup in general: I have around 2TB worth of data on the NAS right now, at least 1TB of which is photos that I want to back up first. Even with a fast Internet connection this will take a while, and I don't want to keep the precious hard disks in the NAS spinning for weeks or even months. They would not be able to go to standby and would spin without pause, decreasing their life expectancy.
So I decided to instead connect a portable USB drive to my Raspberry Pi, to which I clone the data I want to back up from the NAS. Cloning once is relatively fast, since both the Pi and the NAS are in the same network, and incremental syncs will not take long either. I simply mount the NAS drives and then use rsync to sync new files once a day. This is how my setup looks:
Below you will find details about:
- My hardware set-up and use of rsync
- How to install CrashPlan and get it to run headless on the Pi
- How to connect from a Windows box using the CrashPlan UI
- How to fix the 'Waiting for backup' bug caused by InotifyManager that prevents the backup from starting.
Getting data to the Raspberry Pi
So let's start with the setup. As stated above, I use a simple USB disk as an intermediate place to store my backup before sending it to CrashPlan. I got myself a Western Digital WD Elements 2TB drive for this, but any somewhat reliable USB drive will do. Just make sure it has enough capacity to hold the data you want to back up. If the drive dies, no problem: your original data is still on the NAS.
Format and mount USB drive
I formatted the USB drive with ext4, which is the standard nowadays for Linux systems and reliable. I created just one large partition and mounted it at /mnt/usb-drive - but of course you can mount it anywhere you like.
The next step is to access the NAS data, for this I simply mounted my "Photos" folder from the NAS by using the following /etc/fstab entry:
//storage/Photos /mnt/storage-photos cifs credentials=/etc/cifspwd,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0
The first part, 'storage', is the NAS server name, but you can also use an IP address. 'Photos' is the name of the share on the NAS I want to mount. Next comes the directory I want to mount it at, followed by the file system (cifs is the standard Windows file sharing protocol). I added my username/password credentials to the /etc/cifspwd file and specified the character set and permissions. Done. Now the drive will be mounted every time the Pi restarts.
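The /etc/cifspwd credentials file referenced in the fstab line is a simple key-value file; a sketch with placeholder values:

```
username=<your NAS user>
password=<your NAS password>
```

Since it contains a password in plain text, it is a good idea to restrict access to it, e.g. with chmod 600 /etc/cifspwd.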
Copy data using rsync
With both the NAS and the USB drive mounted, I use rsync to copy and periodically update the files. Now rsync has a ton of options, but I found that for my setup the following works great (the source and destination are the mount points from above):

$ rsync -vr --progress --stats /mnt/storage-photos/ /mnt/usb-drive/photos
- -r: Most important, it will sync recursively through all subdirectories
- -v: Verbose logging, in case something goes wrong. Useful when manually executing the command.
- --progress --stats: Also useful when manually executing the command to see what is going on.
I then put the above into /etc/crontab and let it run once a day, during the night. I decided to go with 3 a.m. since this is a time I am unlikely to be online changing the files on the NAS. If you schedule CrashPlan to only run during a given time frame, it's good to coordinate the rsync time so that it runs before CrashPlan kicks in. For example, you could have rsync run at 2 a.m. and then have CrashPlan run from 3 to 7 a.m.
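A matching /etc/crontab entry could look like this; note that, unlike a user crontab, /etc/crontab takes a user field (the paths are the ones from my setup, adjust as needed):

```
# Sync new files from the NAS to the USB drive every night at 3 a.m.
0 3 * * * root rsync -r /mnt/storage-photos/ /mnt/usb-drive/photos
```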
Even without uploading the files to the cloud this is already pretty useful. We created one more copy of the data we care about, so if the NAS for some reason malfunctions and deletes all data, we have one more copy on the USB drive.
Getting CrashPlan to run on the Raspberry Pi
Now having the data in two places locally is great, but the main goal of this exercise is to get the data to a different physical location, in my case using CrashPlan.
Luckily the CrashPlan software is well designed and consists of a headless engine and a GUI that can be run separately. This means that you can run the engine on a Raspberry Pi without an X server and run the GUI on a different Windows, Mac, or Linux box, then connect to the Pi through the network.
First, let's get CrashPlan installed on the Raspberry Pi. Luckily they have a Linux version of their software, however it is optimized for x86 CPUs and not the ARM architecture. This means we have to make a few modifications to the package before it successfully runs.
Download and installation
CrashPlan comes packaged with a Java runtime that is compiled for x86, not ARM. So let's make sure we have Java installed before we get going. We also need the Java Native Access (JNA) library, so use the following to get an up-to-date copy of both:
sudo apt-get install libjna-java oracle-java8-jdk
The latest CrashPlan Linux version can be downloaded from their download site. At the time I was writing this article it was version 4.3.0, but I have since upgraded to 4.7.0 (on 7/28/2016) and it works the exact same way. Extract the file, then go into the CrashPlan-install directory that was extracted and execute the install script as root:
$ sudo ./install.sh
I leave everything as is, except for the location for the backups: I use /home/pi/crashplan-backups instead.
Once you accepted all the configurations the script will download all the files required for the installation, which might take a few minutes. Once it's done it will tell you:
Installation is complete. Thank you for installing CrashPlan for Linux.
So far, so good. It also tells you that you can start the engine using sudo /usr/local/crashplan/bin/CrashPlanEngine start, but unfortunately this will fail on the Pi. Although the output will be "OK", ps aux | egrep -i crash will show you it's not actually running.
The problem is that this version of CrashPlan for Linux is not made for the ARM architecture. Although CrashPlan runs on Linux, it still uses a bunch of native libraries to do its job, and the ones that come with the package are not made for the ARM platform. You can find a detailed error in the engine error log:
$ cat /usr/local/crashplan/log/engine_error.log
/usr/local/crashplan/jre/bin/java: 1: /usr/local/crashplan/jre/bin/java: ⌂ELF☺☺☺☻♥☺►84490: not found
/usr/local/crashplan/jre/bin/java: 2: /usr/local/crashplan/jre/bin/java: Syntax error: "(" unexpected
We need to tell CrashPlan to use our installed Java instead of the one it came with. Some sites say to remove it and add a symlink. An easier way is to simply edit /usr/local/crashplan/install.vars and set JAVACOMMON=/usr/bin/java. Trying to start the engine now we get a bit further, but still no success:
$ cat /usr/local/crashplan/log/engine_error.log
java.lang.UnsatisfiedLinkError: /usr/local/crashplan/libjtux.so: /usr/local/crashplan/libjtux.so: cannot open shared object file: No such file or directory (Possible cause: can't load IA 32-bit .so on a ARM-bit platform)
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
As can be seen, the libjtux.so file CrashPlan comes with is not ARM compatible either, so we need to replace it. The same is true for libmd5.so. Thanks to Jon Rogers, patched versions are available for both. So, from the /usr/local/crashplan directory, remove or rename the old ones and get the ones that work:
$ sudo mv libjtux.so libjtux.so.x86
$ sudo wget http://www.jonrogers.co.uk/wp-content/uploads/2012/05/libjtux.so
$ sudo mv libmd5.so libmd5.so.x86
$ sudo wget http://www.jonrogers.co.uk/wp-content/uploads/2012/05/libmd5.so
Starting up the service now should be successful, and engine_error.log should stay empty. The following command confirms that CrashPlan is indeed running:
$ sudo service crashplan status
CrashPlan Engine (pid 11241) is running.
To wrap it up, make sure to start CrashPlan when the Pi boots up:
$ sudo update-rc.d crashplan defaults
Access CrashPlan from a different machine
Now that CrashPlan is running on the Raspberry Pi, we need to connect our account and configure it. While this can be done through configuration files, the most convenient way is to use CrashPlan's own UI. Luckily you can run it on a different machine: another Linux, Windows, or MacOS/OSX box runs the UI and then connects to the engine running on the Raspberry Pi.
CrashPlan has a good tutorial for this, but emphasizes that this is not an officially supported feature. Here are the highlights for getting this to work on Windows:
Access without SSH tunnel or port forwarding
CrashPlan recommends using an SSH tunnel to forward the port on the server that accepts the UI connection to the computer you are running the UI on. Instead, you can change one parameter to have the engine accept connections from any machine. Edit /usr/local/crashplan/conf/my.service.xml and change the serviceHost value to the following, to allow connections from any computer.
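Concretely, the changed line in my.service.xml should end up looking like this (0.0.0.0 makes the engine listen on all interfaces; I am inferring the value from the warning further below about not leaving 0.0.0.0 on the UI side):

```xml
<serviceHost>0.0.0.0</serviceHost>
```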
If this is too insecure for you, you can instead put in the IP address of the computer you are going to connect from.
Copy authentication token

Next, copy the contents of /var/lib/crashplan/.ui_info, which we will need to bring over to the machine running the UI. On Windows, you can find the file at C:\ProgramData\CrashPlan\.ui_info; on OSX it is /Library/Application Support/CrashPlan/.ui_info. If you have trouble editing the file, make sure to run your editor as Administrator, otherwise you might not be able to save the changes (Windows will complain that another program has the file open, which is not the case; a normal Windows user simply has no write access to this file).
Copy the contents of the file from the Raspberry Pi in there. Unlike in the CrashPlan tutorial, do not change the port to 4200, since we will connect directly without port forwarding. But do change the IP at the end to point to the server (do not leave it as 0.0.0.0, since the UI will not be able to find the server otherwise).
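For orientation, the .ui_info file consists of a single line with the port, an authentication token, and an IP address, separated by commas. A hypothetical example after editing (the token is a made-up placeholder; use the one you copied from the Pi, and 192.168.1.10 stands in for your Pi's address):

```
4243,12345678-abcd-ef00-1122-334455667788,192.168.1.10
```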
Start CrashPlan UI and configure service
Now you should be able to start the CrashPlan UI from the Start menu. If it doesn't work, make sure to kill the CrashPlanDesktop.exe process in the Task Manager and try again.
CrashPlan should now let you log into your CrashPlan account. Configure everything and select the folders on the USB disk that you want to back up. IMPORTANT: Don't select the mounted NAS share directly; the whole point of the USB disk was to let CrashPlan use that location for its backup.
'Waiting for backup' problem
Now the world would be perfect; however, you will see that the backup is not starting. Back on the Pi we can see the reason:

$ tail -f /usr/local/crashplan/log/engine_error.log
Exception in thread "W15324091_ScanWrkr" java.lang.NoClassDefFoundError: Could not initialize class com.code42.jna.inotify.InotifyManager
        at com.code42.jna.inotify.JNAInotifyFileWatcherDriver.<init>(JNAInotifyFileWatcherDriver.java:21)
        at com.code42.backup.path.BackupSetsManager.initFileWatcherDriver(BackupSetsManager.java:393)
        at com.code42.backup.path.BackupSetsManager.startScheduledFileQueue(BackupSetsManager.java:331)
        at com.code42.backup.path.BackupSetsManager.access$1600(BackupSetsManager.java:66)
        at com.code42.backup.path.BackupSetsManager$ScanWorker.delay(BackupSetsManager.java:1073)
        at com.code42.utils.AWorker.run(AWorker.java:158)
        at java.lang.Thread.run(Thread.java:744)
I did two things to prevent this from happening:
Firstly, I created a new directory for temporary files:

$ sudo mkdir /var/crashplan

and added this path to the first position of the SRV_JAVA_OPTS variable in /usr/local/crashplan/bin/run.conf. So my line now starts like this:

SRV_JAVA_OPTS="-Djava.io.tmpdir=/var/crashplan -Dfile.encoding=UTF-8 ...
Secondly, I needed to tell CrashPlan to use the JNA library we installed at the very beginning. To do this, edit /usr/local/crashplan/bin/CrashPlanEngine, search for the line that defines the FULL_CP variable, and put the path to /usr/share/java/jna.jar in the first position. My complete line in the end looks like this:

FULL_CP="/usr/share/java/jna.jar:$TARGETDIR/lib/com.backup42.desktop.jar:$TARGETDIR/lang"
Now restarting the service will not show any errors and once I connect with the CrashPlan UI from my Windows box, I can successfully start the backup.
Conclusion + Thanks
This rather lengthy article is more of a "notes to self" piece than a classic tutorial, but I think it might be useful for others trying to do the same, which is why I didn't just leave these notes in my folder and instead published them here.
The following sources were extremely helpful for most of the steps and workarounds above, I couldn't have gotten this to work without them:
Background

I am currently working on a project where I want to deploy small, battery-powered, ATtiny-based modules to track a few things and communicate back to an Arduino or Raspberry Pi.
While I was first trying to use the Virtual Wire library, I found out it couldn't handle the ATtiny85 that I was running at 1 MHz. The reason is that at this clock speed the chip has a different timing from chips that run at regular speeds, like the Arduino itself.
Luckily there is an alternative in the form of the Manchester encoding library for Arduino. It works not just with the typical Arduino chips but also with various ATtiny variants. The only problem was that the Arduino IDE setup I had in place did not work with this library. In the following I will outline what the issue was and how I fixed it.
My original ATtiny85 setup

When I started out coding for the ATtiny85 I used SparkFun's excellent tutorial on how to set up the Arduino environment to compile and upload ATtiny programs. At one point you need to install the hardware specs and code for the ATtiny family. SparkFun and other places recommended David A. Mellis' repository, which I downloaded and which worked fine for the blink tutorial and a few simple projects. Even Virtual Wire was no problem with this setup.
Switching to arduino-tiny

As I mentioned in the beginning, I had to switch over to the Manchester encoding library. When I did, I got a bunch of error messages about symbols not being found:
error: 'TCCR2A' was not declared in this scope
error: 'WGM21' was not declared in this scope
error: 'TCCR2B' was not declared in this scope
error: 'CS21' was not declared in this scope
error: 'OCR2A' was not declared in this scope
error: 'TIMSK2' was not declared in this scope
error: 'OCIE2A' was not declared in this scope
error: 'TCNT2' was not declared in this scope
Through a post on Stack Exchange I found an alternative library for supporting the ATtiny hardware specs that seems more complete than the one I used before. It's called arduino-tiny and can be found on Google Code at the moment (though it might move since Google Code is in the process of shutting down).
Installing arduino-tiny in the Arduino IDE 1.0 was easy, but I had a few issues with Arduino 1.6 - though I eventually got it to work as well. I've done this on Windows but there shouldn't be a difference for doing this on other platforms.
Installing arduino-tiny for Arduino IDE 1.0

Installing arduino-tiny for Arduino 1.0 is pretty easy. The Google Code site has two archives at the moment: one for Arduino 1.0 and one for Arduino 1.5. Download the package for 1.0 and then:
- Extract the folder called 'tiny' from the archive you downloaded and put it into the .../Arduino/hardware directory. So in the end you'll have .../Arduino/hardware/tiny/...
- Navigate into the .../tiny/avr/ directory and create a new file called "boards.txt".
- Either copy the whole "Prospective Boards.txt" contents to this new file or search for the chip you have and only copy out this section. This will make your "Boards" sub-menu in the IDE less cluttered.
- This should be it. Restart the Arduino IDE and you should see the ATtiny in the "Boards" menu. Try to compile e.g. a blink program, it should all work.
Installing arduino-tiny for Arduino IDE 1.6
With Arduino 1.5/1.6 they seem to have changed a bit about where the compilers live and how the hardware specs have to be defined. Luckily the arduino-tiny site has an archive for Arduino 1.5 which already has the specs changed to be compatible. However, I couldn't get it to work with the IDE by following the steps I outlined for Arduino IDE 1.0 above. When I tried to compile, I got an error saying that the IDE couldn't find the compiler:
Third-party platform.txt does not define compiler.path. Please report this to the third-party hardware maintainer.
Cannot run program "E:\Program Files (x86)\Arduino\hardware\tools\avr\bin\avr-g++": CreateProcess error=2, The system cannot find the file specified
After looking at the Arduino 1.6 file structure I was surprised to not find the compilers and linkers in the tools directory. In fact I couldn't find them anywhere at first. After some digging around I finally found them in a zip archive under .../Arduino/dist/default_package.zip.
When I tried to Google about it I couldn't find much information, and maybe there is a better way to achieve what I want, but in the end I got it to work by unzipping parts of the archive to make the compiler accessible. Here is the full list of steps I needed to perform in order to compile for the ATtiny with the new Arduino 1.6 and the arduino-tiny hardware specs:
- Follow the same steps I outlined for Arduino 1.0, with the only difference that you need the arduino-tiny archive for version 1.5. At the time of writing there was no 1.6 package, but the format seems to be the same.
- Next, take a look at .../Arduino/dist/default_package.zip. It contains a folder /packages/arduino/tools/avr-gcc/4.8.1-arduino2 (the exact name/version might differ if you have a newer/older version of the IDE; I am currently using 1.6.2).
- Create a new folder .../Arduino/hardware/tools/avr and extract the contents of default_package.zip/packages/arduino/tools/avr-gcc/4.8.1-arduino2 into there.
- The last thing you need to do is to edit .../Arduino/hardware/tiny/avr/platform.txt. Find the commented compiler.path line and replace it with an entry pointing at the bin directory you just extracted (in my case: compiler.path={runtime.ide.path}/hardware/tools/avr/bin/).
- This should be all. Restart the IDE and try to compile a sketch for the ATtiny, including one that uses the Manchester encoding library.
I hope this post helps some folks get set up with their ATtiny and some more advanced libraries like the Manchester encoding library. I have the feeling that there might be an easier way to get the arduino-tiny hardware spec to work with Arduino IDE 1.6. If you know how to get around extracting the AVR compiler, please reach out and I will update this tutorial.
In any case, I am glad I was able to unblock my wireless project, which uses an ATtiny85 to transmit data via a 433MHz transmitter to an Arduino with a receiver. I will write another article about this little project, but I didn't want to clutter it with these library and setup issues.
So I had this router for quite a while. I bought the EA2700 when I moved to the US, without doing much research. That was a mistake. I had a bunch of the famous WRT54GL routers before and was super happy with them, mainly due to the flexibility of installing an alternative OS on it, which unleashes a lot more power and functionality.
However, the WRT54GL was just too dated in 2012, so I decided to get a newer model and got the Linksys Cisco EA2700. I don't want to go into too many details, but this was a pretty bad decision. The software is just really bad: almost no functionality, and on top they tried to put some 'cloud-connection' stuff on it that is not only unnecessary but counterproductive. They tried to market it as a "Smart Router". A bit later I bought myself a TP-Link TL-WR1043ND and have been using it ever since.
However, today I found the (now old) EA2700 in a moving box. This inspired me to check the interwebs for the current status of DD-WRT support... and voilà, it appears like DD-WRT is now supported (though the model doesn't show up in their official database).
This thread has the details. I copied the steps here and modified them a bit to reflect changes that happened in the meantime. This worked for me without any issues:
- Download and unzip Linksys classic firmware. If the download doesn't work, try the support page to get the archive.
- Get the latest DD-WRT build for the router. At the time of this writing it's r25309. But I encourage you to navigate the FTP and see if a newer version is available. Also check the thread I linked above for potential issues with specific revisions.
(**Update 2: I recently upgraded the device to r30471 and it's working fine so far!**)
- Reset the router by pressing the RESET button for ~10 seconds. The green power LED will start blinking.
- Log into the router with the default password admin.
- Go to Connectivity -> Basic and click on Choose File under the manual firmware update section. Select the SSA file you unzipped earlier. Hit Start and Yes.
- After the router restarts use admin as the username and password to log in. Even this step is quite satisfying already since this classic firmware removes all the "Smart" junk that they tried to add to the device to make it more appealing. Now the interface is back to what I am used to from the stock WRT54GL. But we are not done yet!
- Now is a good time to do a good old 30-30-30 hard reset to clear the memory.
- Afterwards, log back into the router, which now asks whether you want to install the shiny rubbish you just uninstalled... refuse and continue to the classic interface. Under Administration, go to Firmware Upgrade and choose Manual Upgrade. Select the TRX file you downloaded from DD-WRT earlier and hit Start Upgrade.
- After the device reboots, DD-WRT should greet you. Enter a username and password and be happy :)
Here is what my status page tells me after I have DD-WRT running. Not a lot of free memory, but so far it's been running stable for me:
Next I will probably try to do the same for my TL-WR1043ND V1, which is still running the stock firmware.
Today I released a couple of updates to my website. In order to test a few new features, I completely redid the UI for the pictures. When you go there now you will see some polaroid-style photos. I was inspired by this post from the guys at ZURB.
The albums view now shows some seemingly randomly positioned photos with custom text in a font that looks like handwriting. I used the Google Web Font API for this.
While I was at it, I also replaced all the header graphics with actual fonts. This reduces the number of requests made to fetch the data and the total number of bytes loaded.
In the next weeks I will try to roll out some more features to the site. I plan to overhaul the portal page a bit and to optimize performance even more.
By the way, you can now also reach this site at s13g.com
This week I've got a new toy: The Nikon D7000. Today I had a chance to test it for the first time. I am with my folks in Germany over the weekend and the weather is gorgeous.
One of the great new features of the D7000 is that it can shoot HD video at 1080p and 24fps with auto-focus enabled throughout the shoot. I didn't have a camera that could shoot HD video before, except for my recently purchased Canon S95, which "only" shoots 720p.
After some tests in the garden using flowers and fish as my subjects, I have mixed feelings, although I am mostly positive: the picture quality is stunning, especially when watching the footage on a big 1080p screen. The DSLR lenses let you shoot with shallow depth of field, which allows some quite professional-looking shots.
So what's the downside, you might ask: although the D7000 is able to focus throughout the shoot, the focusing technique used is not very good. The contrast-detection mechanism forces the camera to focus in and out a bit until it gets it right. This makes it very hard to shoot moving targets reliably, for example. In addition, the constant focusing is quite noisy, which renders the internal mic useless in such situations.
I guess one solution to the auto-focus problem is to focus manually, but this can be quite challenging.
Nevertheless, I am quite impressed by the video performance of the D7000. After all, this wasn't the main reason I bought it, as I mainly want to take great pictures. But having such a powerful video feature at hand is certainly nice. I am sure, I will use it quite a bit in the future.
It's been a while since I wrote my last blog post. I've decided that it's time to get the dust off this blog and throw a new entry out. And again I hope that this time I might be able to keep some more posts coming.
Anyway, on to the actual topic of this post: I just released the source code of an application I worked on in my 20% time at Google. I call it PicView. It's a photo viewer application for Android.
What can it do?
It can download and show you photos from Google Picasa users. When you start up the application you can enter a Picasa user name, such as "saschah", which is mine. You will be presented with a list of albums of that user. A tap on the album will show you thumbnails of all the photos in that album. A tap on one of those - you guessed it - will show you the photo full screen.
In full screen mode you can tap on the left or right-hand side of the photo to go back and forth through the current album.
The app also caches heavily: every downloaded photo and thumbnail is stored on the device.
My plan is to add more features (see the project site, which I link to at the end of this article). At the moment you cannot download it from the Market, but you can compile it yourself.
Finally the new website is here. It has taken much longer than I thought, but I guess that's normal when you try to finish a degree, start working, and spend time on other toy projects.
I started designing this new page a long time ago and I think it took me a year to come up with this design in the end. However, it took me another 2 years to finally push a working version out there that I could live with. For the last year or so it was basically living in my experimental folder and only a handful of people have seen it. But now I finally thought: man, this is embarrassing, and your design will soon be totally outdated.
Technically, I started with good old PHP, but after my internship at Google, where I actively worked on the Google Web Toolkit (GWT), I thought I'd use it for my website. So I started from scratch and implemented a GWT version that was totally dynamic. However, I then saw that the dynamic version brought more disadvantages than advantages, and I didn't want to put it out there. After a lot of time I dug out the PHP code, polished it, and here it is :)
Not everything is working yet, as you might see, and not everything is super polished. But I figured I will probably be more motivated to update it once it's out there where everyone can see it :)
A few technical details: despite using plain old PHP, I make heavy use of Google services to fill this site with content. I didn't want to implement my own content management system, but I also didn't want to use an existing one. So for the articles section, as well as for the excerpt on the front page, I use Blogger and its GData API to pull in the articles. For the pictures you see on the front page, as well as in the pictures section, I use Picasa, again with the GData API.
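The site itself is PHP, but the idea is easy to sketch in any language, so here it is in Python: the GData feeds are plain Atom XML, and pulling in post titles and bodies is just a matter of parsing the feed's entries. The feed string below is a made-up stand-in for a real Blogger response, and the feed URL in the comment is an assumption, not the site's actual endpoint:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# Stand-in for a Blogger GData response; a real one would be fetched
# from something like http://<blogname>.blogspot.com/feeds/posts/default
sample_feed = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <title>New website launched</title>
    <content type="html">&lt;p&gt;Finally online!&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>PicView released</title>
    <content type="html">&lt;p&gt;A photo viewer for Android.&lt;/p&gt;</content>
  </entry>
</feed>"""

def parse_posts(feed_xml):
    """Extract (title, html_body) pairs from an Atom feed string."""
    root = ET.fromstring(feed_xml)
    posts = []
    for entry in root.findall(ATOM + "entry"):
        # findtext returns the element's text, with XML entities decoded.
        title = entry.findtext(ATOM + "title")
        body = entry.findtext(ATOM + "content")
        posts.append((title, body))
    return posts

posts = parse_posts(sample_feed)
```

A PHP version would do the same thing with `simplexml_load_string`; either way, the site stays static code while the content lives in Blogger and Picasa.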
I hope you like it - if so, leave me a message. I will try to hook up Blogger's comment function to my site at some point, so this should become easier.
For those folks totally not getting what I am talking about because they are looking at my actual blog, go over to www.haeberling.de :)
You might ask why I am publishing so early again - it's not been months since my last post, so what the heck... Well, first of all, there is not much time left for me here in Atlanta. My internship is almost over, only two weeks left. It's unbelievable, but time is speeding up - at least that's how it feels to me. The second reason why I am blogging again: I want to focus on pictures this time, not text. There aren't too many pictures for you, as I didn't get around as much as in San Francisco. But still, just to give you a glimpse of what it is like around here, I assembled a little album. Unfortunately, I forgot my camera when Maria and I were out in Madison, GA, a neat little town west of Atlanta. But sometimes the camera was with me, and so some photos have been captured.
You can find them on Picasa:
Maybe a last post will follow, depending on my schedule and mood. I have to finish some papers for my university in Germany and also need to get some Christmas presents. Not that easy, but I guess you are very aware of that fact already! Well, my last day at work will be Dec 19th, and my flight back to Germany is two days later. So only one day for Christmas shopping. But it will work out..... somehow.
So long for now, from the chilly and windy Atlanta, Georgia
|First of all, I want to say sorry for not writing for such a long time. I could say I didn't have the time, but that wouldn't be entirely true. It's just that not much has happened that is worth writing about, and I am in touch with many of you anyway.
As my last post was written in Mountain View, let me say that I arrived safely back in Atlanta after staying in California for about a week. After an amazing week at the headquarters of Google, I spent a day in one of my favourite cities: San Francisco. As it is only a 90-minute ride from Mountain View, I managed to visit some friends there for a few hours.
Some weeks ago, my girlfriend Maria stayed here in Atlanta for 15 days, and I tried to make some room among all the work I have to do. I think it worked out great. Fortunately, the working hours are quite flexible, so I could shift some days and work from home.
During those two weeks we managed to see a lot around here and it was actually the first time I could experience the city and its surrounding area. Before that, the only areas I saw were basically the ones on my everyday trip to work and home.
We visited the Coca-Cola museum, which was nice but not spectacular. What you basically see there are commercials from all over the world, from the beginnings of Coca-Cola until today. At the end you get the chance to taste a lot of different products of the company, some of which taste quite bizarre.
An amazing place to visit when you are in Atlanta is the Georgia Aquarium, where we went on a Sunday. It is one of the world's largest and contains (among other fascinating animals) a large basin with four whale sharks in it (for those of you who don't know: whale sharks are the world's largest fish. At least that's what we've been told ;)). A great experience, although going on a Sunday might not be the best decision if you have the choice.
Since then I have spent my time mainly working in the office, working at home (mostly on XMl11) and studying for university. I want to finish this semester so I can start my thesis next year.
I will post back if something noteworthy happens. I have taken some pictures and will put them online sometime soon; I will post the link here. But don't expect them to be as astonishing as San Francisco! :)
|After making a stop in Denver, I arrived in Mountain View yesterday. Everything went pretty well, except that there are too many hotels called Residence Inn in this area, so even the cab drivers get confused. One of them actually dropped me off at the wrong place. But another one then got it right, so I had a nice place to spend the night before the big day.
And today is... the big day: the first day of my intern orientation. Of course, I am not allowed to write about anything in detail, but from what I've seen today, this is just a great place to work and live. Free food everywhere, places to relax... just a great atmosphere, and a lot of very talented people all around.
|First step done. I arrived in Atlanta today without any problems. I am currently staying at a hotel in Midtown Atlanta until tomorrow noon. I will then leave for the airport again to get to Denver and finally San Jose.
|With just a few hours left in Germany, it might be a good idea to write a little bit about the current status.
My visa got here in time. Just one day after being in Frankfurt, my passport with the attached visa was in the mail, yay!
I also made arrangements for my stay in Atlanta. I found a nice offer on craigslist for a furnished room in Decatur (which is quite close to Atlanta city itself). After a few phone calls and exchanging quite a few e-mails with the owner of the house, I am confident that this will work out and that I will have a nice stay there until the end of December.
So tomorrow (Sunday) I will depart from Frankfurt at 9.30am and arrive in Atlanta at around 1.00pm. After I get through screening, customs and immigration, I will head to my hotel in downtown Atlanta, where I will stay for one night. The next day I will go to the airport again to get on my plane to Denver. After a quick stop, my connecting flight will get me to San Jose.
From there I will head to the hotel in Mountain View, where I will (again) stay for one night, until Tuesday. The rest of the week I will stay in Google's corporate apartments.
On Tuesday, my orientation will start at the GooglePlex ('Plex). I am really looking forward to that week in Mountain View.
If I have time, I will try to go to San Francisco for one day to visit some friends, but I don't know about the schedule yet.
So that's it for the moment. The next entry will follow, and it will be from the United States.
|Today I've been to the US Consulate in Frankfurt to apply for the visa. Everything went surprisingly quickly. I got there at about 8.20am and left about two hours later with my visa application granted. Now I hope that the visa will arrive here in time.
I also booked the flight to Atlanta today. It departs on September 3rd from Frankfurt and goes non-stop to Atlanta; I will arrive there around 2pm. A few days ago I also found a nice person through craigslist who is renting out a room for this time. So after stepping out of the airport I will look for a way to get to this place. I will have some time to rest until my flight to Mountain View departs the next day (Monday). Unfortunately not a non-stop flight. But well, this way I will see Denver... or at least its airport.
So everything seems to be set for a stressful week. After getting back to Atlanta on Sept. 10th, I will start my regular job at the Google offices there, and I hope to settle into a routine quickly.
|Today I've done some experiments with the layout of this blog. As it turns out, it is very flexible: all the HTML can be formatted, so I implemented a design that I made a couple of days ago for go|west 3. I also added some useful links, replaced the standard ones, and removed the Blogger bar at the top, as I don't think it's very useful for me. If you think otherwise, feel free to tell me. I am a newbie when it comes to blogging - so bear with me ;)
I might do some further fine tuning on the site and also start adding some on-topic information as well. So stay tuned (whoever is tuned in).
|After using my own scripted site the last time I went to the USA, I thought it might be worth a try to use blog software this time. I will try to explore its capabilities for designing and integrating this into my own site. If it works, this might become a part of my future site, too.
So, sorry for not writing more, I have to do some exploration :)