$100 Development Laptop

At the beginning of 2018 I switched to Linux as my ‘daily driver’. I have a desktop and a laptop from System76 running Pop!_OS, and I’m very happy about that switch.

Linux is everywhere, from Raspberry Pis to supercomputers. It runs nuclear submarines, refrigerators, air traffic control systems and my Drupal development environment. It can run fantastically on modern hardware and bring life back to the forgotten computer in the closet.

I wondered if it was possible to configure an under-$300 laptop for Drupal development. I started by looking at low-end consumer laptops. Best Buy has Intel and AMD laptops with 4-8GB of RAM and an SSD, preinstalled with Windows, on sale in that price range. I asked for recommendations in an online Linux community. Many advised me not to buy consumer ‘junk’ but instead to look for used enterprise-class laptops on eBay at the same price or less. In my research, I found a cult-like following for the ThinkPad T420, a laptop released in 2011 (see the video The $110 Lenovo Thinkpad T420, a Laptop with a Legacy). It’s known for its durability, performance and ‘old-school’ keyboard. That seemed like a reasonable place to start.

Acquisition

The price range on eBay for a T420 with an SSD was between $125 and $225. I found one without a hard drive (or SSD) or power cord. With those two essential parts missing, I was the highest bidder at $45 (plus $12.97 shipping). This T420 has an i5-2520M @ 3.20GHz with 8GB of RAM. I bought a 240GB SSD for $28.95 and a power cord for $10.99. At worst, this would be a failed $97.91 experiment.

ThinkPad T420

You never know what you’re going to get with used equipment. The ThinkPad was in surprisingly good shape. As promised, everything was in working order, with normal wear for a nine-year-old computer. I unboxed it, slid in the SSD and had Manjaro Linux installed in 10 minutes. Manjaro is my first experience with a Linux distribution other than Pop!_OS.

Infamous Linux Wi-Fi Issue?

I had one hiccup: the wireless card wouldn’t work. It wasn’t a big problem because the T420 has lots of ports, one of which is Ethernet (take that, Apple). I tried to get wireless working late into the evening, then decided to install the distribution I was familiar with, in case it was a software issue. I installed Pop!_OS and immediately identified the issue from a message, something like “wireless hardware switch is off.” What!!?? Sure enough, there’s a small hardware switch on the side of the T420 to turn off Bluetooth and wireless. This problem was neither a software nor a hardware failure; it was user error. I decided to leave Pop!_OS running; I’ll experiment with Manjaro another time.

Wi-Fi Hardware Switch

Drupal Development

The primary software requirements for my Drupal development include web browsers (Chrome and Firefox), Lando (and its required software) and VS Code (IDE). While there are many other tools I use day to day, those are the must-haves. Outside of Docker needing some extra attention, loading this software was straightforward. I was up and running in short order.

Observations

After a month of using the T420 as a second laptop for Drupal development and general computing tasks, my observations are:

  • The ThinkPad T420 is a solid computer. It both feels and is a quality build.
  • I like the feel of the classic, “clicky clack” keyboard. It’s easy to use.
  • I missed having a laptop with lots of ports and single-purpose buttons.
  • This computer is fast, not just fast enough. While I didn’t push the limits with lots of containers running while editing audio on a Zoom call, it performs well.

Summary

This T420 running Linux is at least a solid backup computer, and maybe a daily driver for most developers. It feels good to sit behind a classic laptop, running a current OS, while building a modern website. Maybe it’s like cruising down the road in a 1964 Mustang. Is it a fluke that I was able to put this system together for under $100? No. I’ve bought two more and have done the same thing.

BackstopJS and Lando

I’ve recently started using BackstopJS for visual regression testing. You can add the following settings to your Lando configuration, .lando.yml, to include BackstopJS.

services:
  node:
    type: node:10
    globals:
      backstopjs: "latest"
    command: npm start
    run_as_root:
      - apt-get -y update && apt-get -y install software-properties-common
      - wget https://dl-ssl.google.com/linux/linux_signing_key.pub && apt-key add linux_signing_key.pub
      - add-apt-repository "deb http://dl.google.com/linux/chrome/deb/ stable main"
      - apt-get -y update && apt-get -y install google-chrome-stable
tooling:
  backstop:
    service: node

You will need to use the .internal URL from Lando to access the local website. For example:

  "scenarios": [
    {
      "label": "Homepage",
      "url": "http://appserver_nginx.your-site-name.internal"
    }

The backstop commands are then run through Lando tooling:

lando backstop reference
lando backstop test

Recovering Home Video

I have forty-six tapes of home videos covering family events from 1996 to 2005. These tapes have been stored in a box for as many years. I recently took on the project of moving them to a format we can enjoy. My two challenges: how to get the video off the tapes and how to make it accessible to everyone in the family.

Camera to Computer

The camera used to take these videos, a Sony Digital Handycam DCR-TRV120, is still operational. I made that discovery after purchasing a power cord from Amazon. Then I needed cabling to connect the camera to my computer. This was hit and miss because it wasn’t clear to me which cables would work. I’m thankful Amazon has a generous return policy; three cable attempts failed. I settled on a combination of two cables that let me go from the camera’s A/V output to composite, and then from composite to USB. The composite-to-USB adapter came with drivers and software to capture video on a Mac.

Cables:

HDE 3ft. Feet RCA Male to 3.5mm Male Jack Composite Audio Video A/V Cable

S-Video / Composite to USB Video Capture Cable Adapter w/ TWAIN and Mac Support – VHS to USB Composite Svideo

Capture Process

The capture process requires each tape to be played from the camera and recorded on the Mac. A 45-minute recording takes 45 minutes to capture. I learned the capture process is a bit fragile. After recording eighteen videos (roughly eighteen hours of recording), I discovered eleven of them had no audio or had audio that was out of sync. I made some changes to my process to ensure the remaining recordings would go well.

  • The video capture software should be the only application running on the Mac during the capture process.
  • Play the audio through the Mac, not the camera, to verify the audio is reaching the Mac.
  • Reboot at the start of each recording session, or after every four tapes.

Some tapes had sixty minutes of content while others had thirty-five. The best process for me was to let a 60-minute tape run all the way through and edit out the blue screen afterward. This allowed me to do other things while capturing the video. When the recording was complete, I would open the resulting .mov file in QuickTime and trim the blank recording from the end.
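
If you prefer the command line to QuickTime for that last step, ffmpeg can do the same trim without re-encoding. This is just a sketch; the filename and duration below are made up for illustration.

# Keep the first 58 minutes of a capture and drop the blank tail
# (stream copy, no re-encode, so it finishes in seconds)
ffmpeg -i tape_12_raw.mov -t 00:58:00 -c copy tape_12.mov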

Serving Home Video

I chose to serve the home videos through Plex. Plex allows you to store, manage, and stream your personal media. I expect I will be using Plex for home movies and maybe photos in the future. Plex uses a central server to stream content to Plex clients, and a Plex client can be almost any device. For me, that will be Apple TV, iPads, iPhones, a Roku, and Fire TVs. Content can be streamed both inside and outside my home network. With grown-up children, having remote access to the video content is important.

After a bit of research, I learned that a Raspberry Pi could be used as a server, but it may not be powerful enough. Since I had a Pi 3, I decided to give it a try. Comfortable with Linux and the command line, I had a Plex server running on the Pi in 15 minutes using a resource like How to set up a Raspberry Pi Plex server. I connected a USB external drive to the Pi to store the 140GB of home videos.
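
For reference, the server setup boils down to a handful of commands. This is a rough sketch based on Plex’s Debian package repository; the guide I followed may use slightly different steps, and the device name and mount point are only examples.

# Add Plex's package repository and install the server (Raspberry Pi OS / Debian assumed)
curl https://downloads.plex.tv/plex-keys/PlexSign.key | sudo apt-key add -
echo "deb https://downloads.plex.tv/repo/deb public main" | sudo tee /etc/apt/sources.list.d/plexmediaserver.list
sudo apt-get update && sudo apt-get install -y plexmediaserver

# Mount the USB drive that holds the 140GB of home videos
sudo mkdir -p /mnt/homevideos
sudo mount /dev/sda1 /mnt/homevideos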

It worked! The quality of the video is fantastic. When streaming a 30-minute video from a device, it will stop a few times and buffer. A “your server is not powerful enough…” type message will also appear, but it works. (See Update 12-31-18 below)

Next Steps

Plex Server Upgrade – My next project is to upgrade the $35 Raspberry Pi to a more powerful single-board computer (SBC). I’m looking at a RockPro64 or NanoPC-T4 with 250GB of M.2 storage. I think this will meet my minimal needs and not break the bank. More importantly, it’s a fun tech project. Stay tuned. (See Update 12-31-18 below)

Video Editing – I discovered that the 19-year-old video labeled Christmas 1999 was really four events, starting in December 1999 and ending in July 2000. Now that I have the videos on my computer, I’ll be breaking them into smaller videos. No commitment on when this will be complete.

Moving forward – We all take lots of video with our smartphones. For me, it’s not intentional, long-form video, like my Digital 8 tapes. It’s short bursts of interesting things. Moving forward, I need to figure out how to aggregate that video in a format my kids can enjoy in 25 years.

UPDATE: 12-31-18

It turns out, I don’t need to upgrade the Raspberry Pi, I just needed to educate myself on video formats, transcoding, and Plex. As I utilize more features of Plex in the future, I may need more power than a Raspberry Pi 3 provides, but for now, it will work fine to serve my fifty home movies.

The power of Plex is its ability to transcode video for the device viewing the content. When converting my video from tape to digital, I create .mov files. When these videos are viewed on an Apple TV or iPad, they are transcoded from MPEG to H.264, which is a CPU-heavy process.

Plex provides the ability to pre-optimize videos and save them on the server. For my videos, that format is H.264 at 480p resolution. When viewing a pre-optimized video, Plex doesn’t need to transcode the video, just stream it, which is not CPU intensive. This is called Direct Play.
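
To make that concrete, pre-optimizing is essentially doing the transcode once, up front, instead of on every play. Plex handles it for you when you choose Optimize, but the equivalent conversion done by hand would look something like this (the filename is hypothetical):

# One-time conversion to H.264 at 480p so a low-powered server can
# Direct Play the file instead of transcoding it on the fly
ffmpeg -i christmas_1999.mov -vf scale=-2:480 -c:v libx264 -c:a aac christmas_1999_480p.mp4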

When you have a low-powered Plex server, like a Raspberry Pi, the goal is to Direct Play all videos by pre-optimizing them for the devices they are viewed on.

Pre-Optimizing is very easy. You simply choose one or more videos and select the Optimize option. Since this process is CPU intensive, it may take a long time for each video, but it’s only a one-time process.

From Mac to Linux

In 2005 I made the switch from Windows to Mac as my primary working environment. In 2018 I made a similar switch to Linux. In both cases, the change was somewhat gradual, and the process was the same. In 2005 I purchased the newly released Mac Mini and set it up on my desk to the side. Over a few weeks I got comfortable with macOS, and eventually my Windows computer was moved to the side. The same happened at the end of 2017. I purchased a Meerkat from System76, which has a physical profile similar to the 2005 Mac Mini. It too sat to my side as I became familiar with the Linux desktop experience. Linux is now my primary OS.

Why switch? For me, it was practical reasons.

Knowledge. 80% of my computer time is spent doing web development on a LAMP stack (Linux, Apache, MySQL, and PHP). Linux is at the core of my local development environment, as well as the server environments my websites run on. Like most Drupal developers, I’m doing more DevOps, all of which is based on Linux software. The primary reason for my switch was to spend more time in the Linux environment to improve my Linux knowledge and skills.

Hardware choices. While I’m an Apple fan and will continue to use a Mac and iOS devices, they frustrate me. My daughter still uses her 2010 MacBook Pro; that’s possible because I could upgrade the RAM and change the hard drive. The hard drive has been replaced twice, first with a 250GB hard drive, then with a 500GB SSD. I believe that was the last MacBook you could upgrade. Moving to Linux gives me unlimited choices in hardware: desktops and laptops configured how I choose, which can be updated and modified as my needs change.

It’s Possible. Linux distributions and open source software have matured to the point that it’s possible for me to use Linux exclusively. I’m currently using the Pop!_OS Linux distribution. From a user interface perspective, it’s as elegant and powerful as macOS. While it lacks the Mac’s level of integration, it’s refreshing to have less integration; it feels lighter and less bloated. What about MS Office? LibreOffice is a fitting replacement, and I discontinued my Office 365 subscription. I’m finding that Linux could also use the tagline “there’s an app for that.”

Performance. Linux on current hardware is fast.

I don’t believe I’ll switch back to Mac, but who knows!

Static Websites with Sculpin

This blog, as of 8/14/18, is static HTML generated by Sculpin, a static site generator written in PHP. Sculpin converts Markdown files, Twig templates and standard HTML into a static HTML site that can be easily deployed.

What is a Drupal guy doing with a static HTML website? Truthfully, just tinkering with technology. I like the idea of simplifying and minimizing. For a simple blog, is it necessary to have a sophisticated content management system using a database and generating a website dynamically? That’s the question I’m exploring with Sculpin.

Why Sculpin? There are many frameworks available for generating a static HTML website. The most popular is Jekyll, which, like Sculpin, is open source, but it’s built on Ruby and Liquid. Sculpin is built on PHP and Twig, which pairs nicely with my Drupal experience, so it was an easy decision to start with Sculpin. I may take a look at other Ruby and Python options in the future.

I’m not a Sculpin expert, but I’m learning. I downloaded the sample blog implementation to use as a working reference. Following the Sculpin Getting Started instructions, I quickly had the sample website running on my local computer. After looking through the sample website’s code and documentation, I started a new website. I used a simple Bootstrap theme, converted it to Twig and had my blog running in a few evenings.
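
For anyone following along, the getting-started steps amount to a few commands. This is a sketch using the Sculpin blog skeleton repository and a made-up folder name; the official Getting Started guide is the canonical reference.

# Grab the sample blog, install dependencies, and preview it locally
git clone https://github.com/sculpin/sculpin-blog-skeleton.git my-blog
cd my-blog
composer install
vendor/bin/sculpin generate --watch --server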

Sculpin Basics

Content

Content resides in Markdown files (.md). In my blog implementation, the content is either a blog post or a simple page; in Drupal terms, an Article or Basic Page. To create a new blog post, I simply create a new Markdown file with some metadata at the top of the file. The example below is a blog post with metadata.

---
title: At home on July 4th
author: Stephen Cross
tags:
    - Life
primary_image: /images/blog/home_primary.jpg
primary_image_alt: Mantle with patriotic HOME sign
list_image: /images/blog/home_list.jpg
list_image_alt: Mantle with patriotic HOME sign
---

It's July 1st. In most parts of the US, that means three days until a day off to celebrate the nation's birthday. In Bristol RI it has a very different meaning. The 4th of July is a significant event, it's THE event. Planning for July 4th starts on July 5th of the previous year. The celebration begins in June and climaxes with the oldest parade in the country.

...

We are days away from hosting our second 4th. It's exciting to have family and friends join us for this purely American, hometown celebration.

Happy 4th.

Templates

Sculpin uses Twig as its templating engine. Twig is straightforward, and there is an abundance of references online.

Generate

When your templates are set up and you have some content, you generate the website. Sculpin uses the Markdown and templates to output a standalone HTML website.

vendor/bin/sculpin generate

Sculpin also provides a web server to view your website locally as you work on it. The command below starts the server, which monitors your source code and automatically regenerates the HTML as you update Markdown and templates. I can then view the website at http://localhost:8001.

vendor/bin/sculpin generate --watch --server --port=8001

Detected new or updated files

Generating: 100% (221 sources / 0.00 seconds)
Converting: 100% (232 sources / 0.08 seconds)
Formatting: 100% (232 sources / 0.01 seconds)
Processing completed in 0.13 seconds
Starting Sculpin server for the dev environment with debug true
Development server is running at http://localhost:8001
Quit the server with CONTROL-C.

Deployment

When development is complete, you generate the HTML to a new folder, output_prod/:

vendor/bin/sculpin generate --env=prod

You can then push the HTML to the web server with scp or rsync.

rsync -avze ssh output_prod/ user@website.com:/var/www/html --delete

Workflow

After the website is built, the daily workflow is:

  1. Create a new post in a Markdown file, e.g. _post/2018-08-01-Hello-World.md
  2. Generate HTML
  3. Push HTML to Server

For me, these are manual steps, but I plan to change this workflow to use a continuous deployment strategy. As I push updates to my git repo, the HTML will be automatically generated and pushed to the server.
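
I haven’t built that pipeline yet, but the job itself should be small. A minimal sketch of the script a CI job could run on each push, reusing the commands above (the destination is the same placeholder):

#!/usr/bin/env bash
# Build the production HTML and push it to the web server
set -e
composer install --no-dev
vendor/bin/sculpin generate --env=prod
rsync -avze ssh output_prod/ user@website.com:/var/www/html --delete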

Benefits

Although I’ve recently started using Sculpin and my knowledge is limited, I see the benefits as:

  • Performance – HTML websites are super fast.
  • Low-powered production server – The website can run on almost anything. The only runtime requirement is a web server. I have StephenCross.com running on the smallest DigitalOcean VPS with only Apache installed. I look forward to trying a Raspberry Pi.
  • Low maintenance – I don’t have to worry about frequent stack updates.
  • Backup – While I’m backing up the server weekly, it’s not strictly necessary. The content and tools are on my local computer and in a git repo.

Cons

I haven’t found many drawbacks yet. I do find the documentation sparse, with few online examples and resources.

Next Steps

I see the next steps as:

  • Implement a CI process
  • Implement a search mechanism
  • Add sidebar to view by category

Check back for updates!

Updates

8/16/18 – Added SSL. It’s pretty straightforward to add SSL following this community post on DigitalOcean for Let’s Encrypt.
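
For the record, the gist of that guide is Certbot’s Apache plugin. A minimal sketch, assuming Ubuntu with Apache and using my domain as the example:

# Install Certbot and request/install a certificate for the site
sudo apt-get install -y certbot python3-certbot-apache
sudo certbot --apache -d stephencross.com -d www.stephencross.com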

8/17/18 – Added Disqus for commenting. Sculpin provides this capability.

11/28/19 – Time for a change. I still like Sculpin, but I’m going to check out WordPress.

At home on July 4th

It’s July 1st. In most parts of the US, that means three days until a day off to celebrate the nation’s birthday. In Bristol RI it has a very different meaning. The 4th of July is a significant event; it’s THE event. Planning for July 4th starts on July 5th of the previous year. The celebration begins in June and climaxes with the oldest parade in the country.

I grew up in Bristol RI. I marched in the parade as a member of the Bristol High School drumline. Growing up in Bristol is like growing up as an elf at the North Pole: you know it’s special, but it gets old, and you want to get out, to be a dentist. I moved out of Bristol in my early 20s. Over the next 15 years, I didn’t regularly return for the 4th. As my girls became teenagers, they had friends who lived in Bristol, and we started returning for the parade, then for the fireworks on July 3rd AND the parade on July 4th. For the past 10 years we have found ourselves in Bristol on or around the 4th of July. The charm of this small town, with a proud identity, began to look like home.

There is a “what if” conversation you have with your spouse: “If we could move anywhere, where would it be?” For Erica and me, Bristol was always on the short list. A very short list it was. In 2016 we found ourselves with an opportunity to relocate to Bristol. We celebrated our first July 4th with family and friends in 2017.

We are days away from hosting our second 4th. It’s exciting to have family and friends join us for this purely American, hometown celebration.

Happy 4th.

When Software Lifts Hardware – Outcast on Apple Watch

Over the past year, I’ve prioritized exercise in my life, most days spending an hour outside or at the gym. I take advantage of that time to listen to podcasts and audiobooks. It took two devices to make this work for me: a health tracker (Fitbit) and a phone (iPhone).

While I had wanted an Apple Watch since its introduction, the ability to leave your phone behind made the Series 3, released in September 2017, a compelling product. I upgraded my Fitbit to an Apple Watch Series 3 Cellular shortly after its release.

My dreams of untethering from my iPhone during my 3.5-mile walk vanished quickly. While the Series 3 Cellular Apple Watch works well without my phone, it doesn’t support podcast playback from Apple’s Podcasts app, and Apple has not made it easy for developers to create podcast apps. In fact, the podcast app I use, Overcast, removed its support for the Apple Watch with the release of watchOS 4. There is also no Audible playback on the Apple Watch.

I was happy with my move from Fitbit to Apple Watch, but I was still taking my phone with me, which is frustrating when I have a capable computer on my wrist. I tried a few apps claiming to support podcast playback on the Apple Watch, but they were too hard to use and very unreliable.

Then along came Outcast! While not perfect, the folks at Crunchy Bagel battled through Apple’s developer limitations and created an app that works. While Outcast allows you to search for and add podcast feeds manually, I was able to import my existing podcast list from Overcast. In a few minutes, I was running the bike path, listening to the Vector with Rene Ritchie podcast while my iPhone was on the charger at home.

For me, a $0.99 app lifted the value of my Apple Watch.

I now wait for Apple to provide the watchOS features developers need to create a quality experience. Hey Audible, what up?