Tuesday, December 27, 2011

Why I'm Terrified of the SOPA du Jour


Full disclosure: what follows is a factually informed but nonetheless angry rant

As many of you may know, an inimical piece of legislation currently barrels through Congress unimpeded by logic or regard for consequence. As you have probably guessed from the clever turn of phrase I've chosen for a title, I'm talking about the Stop Online Piracy Act. The name of the bill alone qualifies it to stand in a league with other incontrovertible bills such as the No Child Left Behind Act and the Patriot Act. Who doesn't want to Stop Online Piracy? Probably the same people who want to leave children behind. Why, you might ask, has an inarguably good bill raised the hackles of the largely apolitical software development community? Simply put, because if the bill were to pass, it would make the Internet worse. Period.

As a staunch opponent of the bill, I've become frustrated not only by the proponents of the bill but also by my fellow opponents. I don't think it's a stretch to say that opposition to the bill lies primarily in the technical community. Many non-technical people hear the purpose of the bill, decide that it sounds like a great idea, and promptly sign on with their support. I don't fault them for this. Why wouldn't they? At this point, technically minded opponents of the bill (bear with me here) respond with statements like "SOPA will break the foundation of the Internet" and "SOPA would spell the end of the free Internet". While true, the first statement is so broad as to be meaningless, and the second one provokes the question "why should I care?" Let's dive in and see 1) why the bill is so bad, and 2) why you should care.

I'm going to try to clarify some technical issues for non-technical people precisely because the folly of the bill lies in the technical implementation, not the spirit. When you type "www.google.com" into your browser's address bar, your browser fires off a request to a remote computer asking "where does www.google.com live?" The response to this query is a string of numbers known as an IP address, or "internet protocol" address. An IP address tells routers (all the computers your request passes through between your computer and Google's server) where to send your traffic. Without that IP address, you can't communicate with Google. The system servicing your request for an IP address lookup is known as the DNS, or "domain name system". Practically, the DNS is a series of computers sitting at large institutions, mostly in the US. Without the DNS, looking up IP addresses is difficult. More on that later.
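For the technically curious, here is a minimal sketch of that lookup using nothing but Python's standard library. The hostname is just the example from the paragraph above; any domain name works the same way.

```python
# A minimal sketch of the DNS lookup described above, using only the standard library.
import socket

hostname = "www.google.com"
ip_address = socket.gethostbyname(hostname)  # asks the DNS: "where does this name live?"
print(f"{hostname} lives at {ip_address}")
```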

SOPA operates by placing restrictions on the operators of the DNS. This group includes universities, large corporations, and non-profits. Basically, if a site is found to be in violation of copyright law (more on how this is determined later), the operators of the DNS are required to remove that domain from their records. Then, when someone requests the address of a violating site, it is nowhere to be found. The site has simply "disappeared". This not only places an undue burden on the DNS operator, it violates some core principles of uniform naming that have allowed the Internet to become the largest information network ever seen. I don't have time to explain it completely here, but basically you get something like China's fragmented version of the Internet.

SOPA targets sites that make money by trafficking in pirated copyrighted content. Congressmen love to characterize these sites as "rogue offshore sites" because nobody is going to leap to the defense of a site that fits that description. The problem lies in the fact that SOPA contains no provisions that lead me to believe it will be limited to these obviously bad websites. Any site can fall victim to a SOPA censorship request--including legitimate sites that you use every day. The ease with which this can happen is downright scary.

In order for a site to fall victim to a SOPA takedown request, all that is required is a "good faith" complaint from a third party that the site is hosting copyrighted content. The target of the complaint doesn't even need to be notified that they are the subject of a complaint. The courts can just shut the site down. "Done. Game over. I'm sorry you were a legitimate multimillion dollar company employing hundreds of people. One of your competitors told us you were stealing, and we believed them. Better luck next year." As you can tell, it's hard to put into words how inane this policy is. And that's just a non-technical flaw in the implementation. The technical flaws are even more gaping.

As mentioned before, the primary mechanism by which SOPA accomplishes its questionable ends is through DNS censorship. However, this simply removes the IP address from the official DNS registry. Nothing is stopping somebody from setting up their own rogue DNS service that contains the IP addresses of all of the censored websites. Under SOPA this would be illegal, but so are pirate sites, and they exist. The problem with these third-party DNS services is not that they wouldn't work but rather that we have no reason to trust them. The DNS is already a fairly significant security risk for the Internet. Moving it outside of the regulation under which it currently operates only exacerbates this problem.
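To make the point concrete, here is a rough sketch of how easily a client can be pointed at any resolver it chooses. It uses the third-party dnspython package and Google's public resolver (8.8.8.8) purely as an illustration; a "rogue" resolver would be queried the exact same way, which is precisely the trust problem.

```python
# Sketch: asking a resolver of our own choosing instead of the system default.
# Requires the third-party dnspython package (pip install dnspython).
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)  # ignore the system's resolver settings
resolver.nameservers = ["8.8.8.8"]                 # any resolver we decide to trust

answer = resolver.resolve("www.example.com", "A")
for record in answer:
    print(record.address)
```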

Imagine a scenario in which you type "www.bankofamerica.com" into your browser. Because SOPA has mired the official DNS in a mess of litigation, you've decided to use a third-party DNS service. Instead of returning the true IP address of Bank of America, this service sends you to a truly rogue site whose only purpose is to steal your online banking credentials. Problem.

Perhaps the most terrifying thing about SOPA is not the future we face if it passes but the fact that it stands a chance of passing at all. I haven't been as scared as I was reading transcripts of the House hearings since I watched The Exorcism of Emily Rose. The number of times a congressman made a tongue-in-cheek comment such as "I'm not a nerd" or "I don't know about this DNS stuff" would be comical if it weren't sad. The kicker here is that the congressmen refuse to call in expert testimony. Any decision-making body that simultaneously claims ignorance and refuses to listen to experts is broken.

Hopefully I've convinced you that SOPA will harm the Internet. However, I would be very interested in hearing why this bill is even being considered. Leave any insights in the comments. It would be much appreciated.

Friday, November 11, 2011

Why You'll Never be Able to Type Fast Enough

Composing a lengthy text message in my head takes all of half a second. Typing out a text message on the screen of my smart phone takes me twenty to thirty seconds. Before I had a smart phone, I might have taken thirty to forty seconds to tap out a message with predictive texting on a normal twelve key cell phone keyboard. The entire difference between the time it takes me to form a thought and the time it takes me to type it in is a waste attributable to a failure of technology. While input technology is improving, a massive amount of friction exists between man and machine--and that's not just inefficient. It's annoying. I'll call this 'friction' a bandwidth asymmetry between humans and computers.



The bandwidth asymmetry exists because computers can convey information to humans much more quickly than humans can provide input to computers. For instance, a fast typist can type at 120 words per minute while a fast reader can read at 300 to 500 words per minute. Keep in mind that reading text is one of the slowest ways that a computer can output information. Video and sound both give us information faster than we could ever read it.
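A quick back-of-the-envelope calculation, using the figures above and the common estimate of roughly five characters per English word, shows just how lopsided the channel is.

```python
# Back-of-the-envelope version of the asymmetry, using the figures above.
CHARS_PER_WORD = 5   # rough average for English
BITS_PER_CHAR = 8    # uncompressed text

def wpm_to_bits_per_second(words_per_minute: float) -> float:
    return words_per_minute * CHARS_PER_WORD * BITS_PER_CHAR / 60

typing = wpm_to_bits_per_second(120)   # human -> computer
reading = wpm_to_bits_per_second(400)  # computer -> human (midpoint of 300-500)
print(f"typing:  {typing:.0f} bits/s")
print(f"reading: {reading:.0f} bits/s")
print(f"asymmetry: roughly {reading / typing:.1f}x")
```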

One logical solution is to simply allow computers to sense video and sound through cameras and microphones. In certain instances, this works very well. You're not going to find a better way of creating a movie than using a digital video camera. Except for purely electronic music, recording live music with a microphone is still the method used to create digital songs.

Why don't we use cameras and microphones to eliminate our bandwidth asymmetry? For one, computers are dumb when it comes to sound and video. Upload a movie to YouTube, and the computer is no closer to knowing whether it's looking at a baby monkey riding on a pig or a Katy Perry music video. Video just isn't a data format that computers understand naturally. The same goes for digital sound. However, the outlook isn't as bleak as I've painted it.

Recently, several products have come to market based on voice recognition. Android comes with voice search built in. More often, though, I use the voice recognition capabilities to dictate long text messages. Interestingly, the voice recognition does not take place locally on my phone. Rather, the sound signal is uploaded to Google's servers, which perform the recognition using machine learning. As an incurable early adopter, I'm biased, but I believe that the voice recognition quality has reached a level such that texting by dictation is faster and more convenient than using the keyboard--especially for long texts.
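As a rough sketch of that dictation flow (audio captured locally, recognition performed by a remote service), here is what it looks like with the third-party SpeechRecognition package; "message.wav" is a hypothetical recording, not a real file.

```python
# Rough sketch of the dictation flow described above: capture audio locally,
# let a remote service do the recognition. Requires the third-party
# SpeechRecognition package; "message.wav" is a hypothetical recording.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("message.wav") as source:
    audio = recognizer.record(source)      # read the whole clip into memory

text = recognizer.recognize_google(audio)  # recognition happens on remote servers
print(text)
```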

The latest installment of Apple's iPhone, the 4S, debuted the digital personal assistant named Siri. Siri listens to an iPhone user's voice and attempts to perform tasks as instructed--tasks such as checking the weather and sending text messages. Siri was the flagship feature of this iteration of the iPhone, and all indications show that it has been a success. Speaking to a phone in public might garner you sideways looks for the time being, but advances like Siri bring the day when this is no longer the case ever closer.

While the interface between humans and computers has become much more streamlined since the days of punch card input and output, communicating with a computer is far from an optimized task. Advances in speech and image recognition have recently broken down barriers to this communication. For now, speech recognition may be the best input method we have, but in order to completely eliminate the bandwidth asymmetry, something even more revolutionary has to come on the scene.

Saturday, July 9, 2011

David and Goliath: Is Google+ a Facebook Killer?

Over the past few days, the buzz surrounding Google's new social effort, Google+, has become inescapable. Google is executing the same strategy that successfully launched Gmail to prominence--Google+ is currently invite only. Though I see this strategy as little more than a gimmick (if you haven't found a Google+ invite by now, you're probably not reading this tech blog), it's an effective one. Not only does this invite system give the illusion of exclusivity, invites are a powerful form of social proof. If a good friend invites you to Google+, Google doesn't have to sell it to you--your friend already has. Exclusivity is one often-credited factor in Facebook's early success, and Google seems to be employing it with similar success.


Dismissing Google as a latecomer in a crowded market is all too tempting. Facebook is a firmly established monopoly, Twitter seems to be an unstoppable force, and nobody has replicated Skype's success in eight years. I'm here to make a strong prediction of Google+'s success. Let me explain.

Rather than struggling to catch established services in the social networking sphere, I think Google is benefiting from eight years of observation in a previously non-existent market. After using Google+ for a week or so, it's easy to see the service as a distillation of several social services. The "stream" is obviously copied directly from Facebook, "hangouts" are a form of Skype on steroids, and "huddles" are a GroupMe clone. Of course, photo sharing, tagging, and commenting are features common to pretty much all social networks. Perhaps the best example of benefiting from others' experience is the feature that allows a user to view their own profile as any other person on the Internet would see it. This feature obviously benefits from Facebook's hard-learned lessons about user privacy. Google has struggled mightily and publicly in the social networking market, but Google+ seems to be a well-thought-out rebuttal.

Beyond benefiting from experience and, frankly, copying many killer features of other platforms, Google+ enjoys some inherent advantages. I check my Gmail upwards of 87 times a day. I check Facebook about 0.8 times a day. I don't think my usage habits are an anomaly. The fact that the Google+ notification icon shows up in my Gmail inbox is an advantage that can't be overstated.

Further, the features that are unique to Google+, among social networks, seem to benefit from a lot of thought and social intelligence. The idea of designating a person as part of a particular "circle" is such a close analog of real life that I have no doubt it will be popular.
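A toy model makes the analogy clear: a circle is essentially a named set of people, and a post is visible only to the circles it was shared with. All the names below are made up.

```python
# Toy model of "circles": a circle is a named set of people; a post is
# visible only to the circles it was shared with. All names are made up.
circles = {
    "family": {"mom", "dad"},
    "coworkers": {"alice", "bob"},
    "college friends": {"carol", "dave"},
}

def audience(shared_with):
    """Everyone who can see a post shared with the given circles."""
    people = set()
    for circle in shared_with:
        people |= circles[circle]
    return people

print(audience(["family", "college friends"]))  # coworkers never see it
```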

Google+ also benefits from synergies with Google's other products. The fact that I can check my email, keep my calendar, create collaborative documents, search the internet, and now use my social network all from one place is immensely valuable to me. Combine that with the fact that Google's services integrate with my Android cell phone 23 times better than Facebook's, and I'm shamelessly rooting for Google+ to become the dominant social network.

Don't get me wrong, Facebook isn't going anywhere for a long time. That service is extremely popular--and for good reason. For the most part, Facebook innovates in the social networking sphere better than any other company.

Putting all strategic prognosticating aside, Google+ is a beautiful piece of engineering. Obviously benefitting from a Google-wide redesign effort, Google+ is minimalist and light but feels strong. If that's not the definition of good design, I don't know what is. The Android application is no different--standing in stark contrast to the "An error had occurred when fetching data. [null]" I get every other time I open Facebook's Android app.

Google seems to recognize the stakes in the social networking race and has responded with a strong entry. I'm excited to see how the service matures. "+1" for Google+.

Monday, June 20, 2011

Law & Technology on the Bleeding Edge: The Future of Google's Automated Cars

Another great guest post from TommyLoff. This time he's flexing his legal muscle.


- 44 Maagnum




Thanks to the fine people at Carpe Daemon (and perhaps a few other small news outlets) our readership (potentially 7 billion strong!) is fully aware of just how awesome Google's automated car project is. Very. In the past seven years, autonomous vehicles have gone from completing 7 of 150 miles in the 2004 DARPA Grand Challenge to an incident-free 140,000 miles on California highways by a set of Google-enhanced Priuses. News of this relatively secret project broke in the fall of 2010, and Google has since busied itself with a few nifty demos and lobbying efforts to establish the legal private and commercial use of its technology in the state of Nevada. The question "Why Nevada?" remains unanswered, but a world in which self-driving cars burn rubber down the Las Vegas Strip is one I very much want to live in.



Google’s lobbying has culminated in the presentation of two bills to the Nevada state legislature. The first would allow for the licensing and legal operation of autonomous vehicles on Nevada roads, and the second would exempt the operators of said vehicles from the statewide prohibition on texting while driving. These bills are likely to come to a vote in the next week, before Nevada’s current legislative session comes to an end. Passage of these bills is certainly an exciting prospect for tech enthusiasts and futurists alike; however, danger lies in whether our legal system can cope with such rapid technological development.

Such legislation is based on Google’s assertion that self-driving cars will not only be safer but will also dramatically reduce infrastructure costs in our transit system by decreasing traffic delays and increasing fuel efficiency. If realized, these promises would yield aggregate annual savings well into the billions and hold the potential to save tens of thousands of lives each year. Gains, however, are not without costs, and the reality of yielding personal autonomy to lines of code will likely require a dramatic shift in our existing legal paradigms - particularly in how we consider liability as a society.

Thankfully such a change will not happen with the flip of a switch. There will be a significant period in which humans share the road with a small minority of autonomous vehicles, and only slowly, over many years, as the number of autonomous vehicles increases will the societal-level benefits become evident. Let’s begin with this transition period - one we may officially enter within days, depending on the decision of the Nevada legislature. At the outset, liability will continue to fall on the vehicle operator, exceptional circumstances excluded. If, for instance, the automated software somehow prevented the driver from effectively controlling the vehicle, thereby leading to a collision, then perhaps product liability would come into play. For the most part, however, existing doctrines of contributory and comparative negligence, which establish the criteria necessary to seek compensation in a civil suit, will continue to be used to assign liability to vehicle operators. (For the legal eagles among the readership, contributory negligence prevents compensation in a civil case if the plaintiff is deemed to be even 1% liable. Comparative negligence attempts to establish what percentage each party contributed to an accident and bases settlements on those figures.) While liability will continue to fall on the operator for the foreseeable future, legislation will continue to narrow the scope of this liability, and laws governing the use of automated technology will develop piecemeal on a state-by-state basis to reduce the culpability of individuals as they transition from vehicle operators to mere occupants. With the Nevada bill already pushing for an exemption from the state prohibition on texting for automated vehicle operators, it is clear that the envisioned end game is a system composed predominantly of automated vehicles in which occupants are divorced entirely from the vehicle’s operation.
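For readers who prefer numbers to doctrine, here is a toy illustration of the two rules described in the parenthetical above; the fault percentages and damages are invented for the example.

```python
# Toy illustration of the two negligence doctrines described above.
# The damages and fault percentages are invented for the example.
def contributory_negligence(damages, plaintiff_fault):
    """Plaintiff recovers nothing if even slightly at fault."""
    return 0.0 if plaintiff_fault > 0 else damages

def comparative_negligence(damages, plaintiff_fault):
    """Recovery is reduced by the plaintiff's share of the fault."""
    return damages * (1 - plaintiff_fault)

damages = 100_000.0
print(contributory_negligence(damages, plaintiff_fault=0.10))  # 0.0
print(comparative_negligence(damages, plaintiff_fault=0.10))   # 90000.0
```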

This is where things fall apart. When the vehicle occupant loses culpability for the actions of the vehicle, a new paradigm must emerge, particularly if we are operating under the assumption that automated cars will still collide from time to time. Regardless of how dramatic a safety improvement we may see, the idea that automation can bring our annual highway death toll to zero is extremely far-fetched. Currently the highway death toll in the United States is roughly 33,000 annually. As a thought experiment, let’s consider that a future system, with negligibly few human drivers, reduces this number to 8,000 deaths annually. Outwardly this is a very positive figure - yet one which lands us in a pickle. The only potentially liable party for motor vehicle accidents is now the product responsible for improving aggregate safety by 75%. Unfortunately, we are a society which thrives on the assignment of blame. If Google's and other manufacturers' automated systems are reducing American highway fatalities from 33,000 a year to 8,000 a year, is it truly reasonable for them to bear the burden of 8,000 wrongful death claims despite preventing an additional 25,000? From both an ethical and a practical standpoint it would appear that one must side with Google. Such dramatic safety improvements should be rewarded, not punished, and as a practical matter, if risk is no longer dispersed across the hundreds of millions of motor vehicle operators in the U.S. and is instead consolidated onto a handful of corporations, the necessary insurance premiums for these corporations could threaten the existence of the entire industry. Yet the idea that wrongful deaths would be written off as a necessary evil of our new and improved transit system is preposterous. As troubling as financial compensation for loss of life or well-being is, even more troubling is no financial compensation, and I for one am somewhat at a loss for a solution. Luckily our courts and legislatures will have at least some time and intermediary steps prior to reaching a new equilibrium.

Obviously there is a long road ahead for the implementation of automated vehicles, and the obstacles are tremendous. For one, as demonstrated by Will Smith in I, Robot, people really don’t like the idea of robots doing people things, especially when it involves giving up personal autonomy. We are irrational beings, and just as the vast majority of drivers assess themselves as above average, so too will many cling to the belief that in a dangerous situation they would prefer to be behind the wheel rather than leave a machine in control. Thus it is unclear whether the ‘cool factor’ alone will be enough to drive initial adoption. That said, in recent years it has not been wise to bet against technological advancement, nor against Google. The demos, the seriousness of the effort to bring the product to market in Nevada, and the sheer size of the market Google could potentially capture all speak to the legitimacy of the operation. That there is dramatic uncertainty regarding the legal ramifications of such technology should be no barrier, and concerns that we do not have a framework in place to equitably resolve every potential outcome ought not be used as grounds to limit progress. This is particularly true here because the exact changes necessary to our legal framework won’t become evident until some level of implementation is achieved. Stepping beyond the boundaries of the autonomous car, the point at which technology and law meet on the bleeding edge must be approached with flexibility in mind. As we move forward, it is the adaptability, not the predictive value, of the law with which our legislatures and courts should concern themselves.

Tuesday, June 14, 2011

Rails vs Force.com: Web App Smackdown

I've had my head buried pretty deeply in both the Ruby on Rails framework and the Force.com platform lately. I'm going to take a breather and try to gain some perspective on how the two fit together. At first glance, most would dismiss the two as completely different beasts. I know I did. Upon closer inspection, I've found the two to be much more similar than I first expected.



First things first. What's a framework? A framework is a piece of software that eliminates the busy work from creating an application. Ruby on Rails is a framework for creating web apps. By leveraging a framework like Rails, a developer can develop and deploy applications much faster than they would be able to otherwise. A framework like Rails contains hundreds of thousands of lines of code (my rough estimate). This is all code that the developer doesn't have to rewrite when making their application.

There is a subtle distinction between a framework and a platform. Both exist in pursuit of the same goal, but each accomplishes it in a different manner. A platform, in the case of Force.com, is a whole package deal. This means that the creators of Force.com provide everything from the tools to develop applications to the hosting of the applications themselves. Force.com applications exist solely on the Force.com infrastructure. In contrast, a Rails app could be hosted on Heroku (owned by the same company as Force.com), Amazon Web Services, or even a personal laptop. These differences mean that each is optimized for a certain application, but first I'll talk about the similarities.



The glaring similarity between the two is the model-view-controller (MVC) architecture. Roughly, the model defines the structure of the data, the view describes how the data is shown to the user, and the controller handles all of the logic that connects the two. Although MVC is a powerful paradigm, not all frameworks implement it... not by a long shot. The fact that Rails and Force.com both follow the MVC architecture is a huge similarity that cannot be ignored.
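For the curious, here is a stripped-down sketch of that division of labor in plain Python (the pattern is the point here, not any particular framework).

```python
# Stripped-down sketch of the MVC split described above, in plain Python.

# Model: defines the structure of the data.
class Article:
    def __init__(self, title, body):
        self.title = title
        self.body = body

# View: describes how the data is shown to the user.
def render_article(article):
    return f"<h1>{article.title}</h1><p>{article.body}</p>"

# Controller: the logic that connects the two.
def show_article(article_id, database):
    article = database[article_id]   # fetch from the model layer
    return render_article(article)   # hand it to the view

database = {1: Article("Hello MVC", "Three parts, one page.")}
print(show_article(1, database))
```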

Both Force.com and Rails make getting up and running with a simple database-backed application almost trivial. The Rails scaffold generator literally requires one line of code to create an entire application (two if you count initializing the app). Not to be outdone, Force.com allows semi-technical users to do the same thing with point-and-click configuration--requiring just slightly more time. Data-backed applications are a pervasive reality, and both of these tools have essentially solved the problem of their creation.

I know exactly the state that I've put you in, and I apologize. Right now you're wringing your hands and wiping sweat off your forehead because you're trying to decide whether you're going to use Rails or Force.com's platform. I've gone to some effort to highlight the similarities, but the fact remains that they are very different tools and should each be used in different circumstances.

Rails gives the developer a much finer level of control over their application--for starters, you can host it with someone other than its creator. Rails is by far the more flexible of the two. In fact, you could change any part of Rails, because it is an open source project: the source code is freely available to read and modify. This makes Rails ideal for creating web apps that might not be like anything created before. Of course, this flexibility comes at a price.

Force.com excels in some areas that Rails isn't meant to optimize--especially in business applications. Data backed business applications follow a fairly uniform pattern. The objects in question and the relationships between them are easy to describe. Force.com has done an incredible job of standardizing the process of business application creation. I think of Force.com as a specialized set of LEGOs. It certainly isn't the best solution for every problem, but if it's the right set of LEGOs for the job, you aren't going to find an easier or better tool. Thankfully for the creators of Force.com, there is no shortage of businesses that need software for which the platform is well suited.

I've developed with Rails for some time now, and I've grown to love the framework. Ruby is one of my favorite languages, gems couldn't be easier, and it's one of the New and Interesting frameworks that everyone is talking about. When I first started developing for Force.com, I had a negative visceral reaction to the circa-2001 styling and copious links. How could anyone prefer this clunky, plodding platform to the flashy, agile Rails? While I haven't yet started worshiping at the Force.com altar (and I have no plans to), it's grown on me to the point that I no longer have an allergic reaction the minute I log on.

Tuesday, June 7, 2011

Twitter Gets Tenure

I'm proud to bring you a guest post by one of the greatest historical minds of our time--TommyLoff. Versed in such diverse topics as the IRA and the latest internet memes, TommyLoff is a welcome addition to the Carpe Daemon roster.


- 44 Maagnum





Every tweet since Twitter’s 2006 debut has made its way into the Library of Congress. That’s every contribution to the twittersphere that you, I, Charlie Sheen, and the BronxZooCobra have ever made, in addition to whatever all those other people have happened to tweet over the years. Yes, this includes the alleged (and later confirmed) twitpics of Democratic Representative Anthony Weiner's wiener. To appreciate just how staggering this volume of data is, a quick glance at stats provided by Twitter for March 2011 offers some more or less unfathomable numbers to ponder. While it took over three years to reach the first billion tweets, currently about 140 million tweets are let loose every day. What took three years now takes a week. Even more staggering is the fact that in the one-month span of March 2011, nearly half a million new Twitter accounts were created daily. Shocking. As of April 2010, Twitter had transferred over 100 terabytes of material to the Library of Congress - a figure which has likely more than doubled by now. Presumably, the fact that the Library of Congress is going to the trouble of archiving this material implies that someone somewhere views this data as an invaluable resource for academics of the Internet age, present and future. Let’s take a closer look at what future hordes of grad students might glean from our tweets, shall we?
The real-time impact of Twitter, Facebook, and social media as a whole has been demonstrated repeatedly. Tweets helped fuel global indignation over the 2009 Iranian election and have provided both stimulus and information, not only to rebels on the ground but also to the outside world, as events have unfolded across the Middle East over the past six months. Such considerations certainly weighed heavily on the United Nations’ decision to declare Internet access a human right and the attempted denial of Internet access a breach of unalienable personal liberties. Certainly historians studying these dramatic global events will turn to Twitter in their analyses; but what of social historians gleaning tidbits on the life and times of the burgeoning Internet age?
Without doubt, Twitter places the historian in the off-the-cuff, uncensored mind of the individual. In terms of real-time, immediate reactions to events, Twitter is unparalleled. The greatest battle the future historian will face is the sheer enormity of the data set, yet it is a data set born in the digital age. Hundreds of billions of tweets are in fact far more manageable than the present reality of historical materials scattered on microfilm and loose-leaf across the globe in university and national archives. A few keystrokes into a search engine and our future historian is well on his way. With this in mind, if not for our children’s historians then for our children’s children’s historians, for goodness sakes keep hashtagging! #psa

That Twitter is free, public, and searchable means that, legally speaking, graduate students have every right to peruse our various online indiscretions, yet it seems unlikely that Twitter’s academic value lies in the musings of the individual. As much as I would like to believe my tweets provide invaluable insight into the world as it exists around us, Twitter is nevertheless a petty and shallow medium. 140 characters simply cannot carry much depth, and that brevity is certainly a critical element of Twitter’s meteoric rise to prominence. That said, if tweets lack the depth necessary to be effective source material, our future historian must turn elsewhere for value: the remarkable way in which Twitter reflects global cultural significance, a trait which will only continue to develop as Twitter’s demographics broaden.

Without further ado, cue case study:

Bro-Patriotism and the Death of Osama bin Laden

Easily my favorite Twitter metric is tweets per second. As a ready-made metric of global cultural impact, nothing like it has ever been seen. For the first time ever there is an available comparison between the cultural relevance of Lady Gaga’s meat costume and, let’s say, President Obama’s goals for improved Israeli-Palestinian relations. Needless to say, Gaga wins out in the end. And ultimately Gaga will always win out unless going head to head with Osama bin Laden: cultural champion of the Twitter age. Of course, more recent events are weighted more heavily in a purely numerical metric like tweets per second, given the ever-increasing number of Twitter users; however, it is safe to say that the death of Osama bin Laden struck a serious chord within our collective psyche.
The metric itself is remarkably precise. As the handy chart demonstrates, significant announcements over the course of the evening of May 1, 2011 correlate perfectly with peaks in the data. (The hourly peaks are the by-product of auto-tweeted updates.) Incidentally, the evening of May 1, 2011 represents the highest sustained period of tweets ever. In the one-hour span from 11 pm to 12 am, tweets easily surpassed 4,000 per second--roughly 15 million tweets in one hour, composed largely of uninspired “USA! USA! #binladen” derivatives. Certainly not the place for dramatic insight into the importance and consequence of the event at hand, yet what can be taken away?
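(A one-line sanity check on those figures:)

```python
# Sanity check: a sustained 4,000 tweets per second for one hour.
print(4000 * 60 * 60)  # 14,400,000 -- "roughly 15 million"
```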


Did the death of Osama bin Laden reveal the inner bro-patriot in every American? A glimpse, perhaps, into a shared American common ground apart from our rather dramatic political differences? Or do the tweets reveal nothing more than the cathartic release of shared loss through a relatively anonymous medium where it’s alright to be a bit douchey? Arguments can and will be made on all these points, yet hopefully they will be recognized as objectively incomplete without drawing on broader source material. The most effective use of Twitter with regard to the historical record, for the time being, is its ability to draw attention to the fact that, “Holy shit, people really cared about Osama, Bieber, Obama, *generic celebrity gaffe*, etc.”

While the digital record will revolutionize the study of history, it will not do so on the basis of Twitter alone. We are each leaving an unprecedented written record scattered across the digital ether. Although this legacy is fraught with privacy and accessibility issues, it must not be discarded for the low-hanging fruit. Twitter is of course not merely a historical curiosity of its own ends but a powerful tool whose greatest attribute is accessibility. Yet in reality it is a mere drop in the ocean relative to the material locked away in servers and hard drives across the globe. The trick will be to access these materials amidst the changing dynamics of privacy in the Internet age. For now, however, Twitter is free, public, and searchable, and by all means the most must be made of this burgeoning resource, not only in the study of history but throughout the social sciences. #neverstoplearning

Saturday, June 4, 2011

Living and Working in the Cloud

Almost every day I hear the term "cloud computing" used, and about half the time it's used incorrectly. Massive amounts of confusion seem to surround the concept. I often hear refrains along the lines of "Cloud computing is the future." To me, this statement is so obvious as to be meaningless. Saying cloud computing is the future is a little bit like saying "Mint toothpaste is the future." Mint toothpaste is so obviously useful that it will certainly be used for years to come. Of course mint toothpaste is the future. However, mint toothpaste is also the present and the past. Cloud computing is here, has been here, and is here to stay.



Put simply, cloud computing is the practice of using computers located off site to accomplish a given computing task. Rarely do you use the Internet without taking advantage of cloud computing. Sure, the term is somewhat nebulous, and gradations of appropriateness exist. Reading your webmail isn't as clearly cloud computing as storing a Google doc in Google's datacenters. However, both could safely be called cloud computing.

A little history will go a long way. Before the Internet was invented in 1969, it's safe to say that cloud computing did not exist. Yes, computers were connected into networks, but these networks were almost exclusively contained within one institution--whether a company or a university or another entity. These networks did not have great geographical span, so cloud computing wasn't really possible. Shortly after the Internet came into being, cloud computing still didn't exist. Computers communicated--sometimes over large distances--but this communication consisted mainly of fetching static web pages. Eventually people realized that heavy duty computing tasks could be offloaded to more capable remote computers. Enter Salesforce.com.

Traditionally, computer software is written on one computer, copied to some storage medium (a CD), and sold to be run on a different computer. We'll call this "shrink wrapped software"--though cellophane isn't a requirement. Salesforce.com shattered this mold when it introduced the revolutionary software as a service (SaaS) paradigm in 1999. The term is roughly synonymous with cloud computing. Rather than charging a flat fee for ownership of a piece of software, software as a service typically charges a monthly fee for the use of software delivered over the Internet. Salesforce.com pioneered this model with its best-in-class customer relationship management software. The SaaS landscape has simply exploded since 1999.

The cloud computing domain is broken up into two major camps--cloud platforms and cloud services. You will be very familiar with the cloud services--Salesforce.com, Twitter, Facebook, Gmail, Dropbox, etc. These are the technologies that have so thoroughly and seamlessly infiltrated daily life that we take them for granted. You may not have heard of even the most popular cloud platforms--Force.com, Amazon Web Services (AWS), Heroku (now owned by Salesforce.com), Google App Engine, Rackspace, and others. Cloud platforms provide computing time for rent. AWS literally rents processors by the hour. They go so far as to maintain a real-time spot market for computing time.
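As a hedged sketch of what "renting processors" looks like in practice, here is a spot request made with boto3, the AWS SDK for Python (which postdates this post); the AMI ID, price, and instance type are placeholders, not working values.

```python
# Hedged sketch of renting compute on AWS's spot market with boto3 (the AWS
# SDK for Python). The AMI ID, price, and instance type are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_instances(
    SpotPrice="0.05",                 # the most we're willing to pay per hour
    InstanceCount=1,
    LaunchSpecification={
        "ImageId": "ami-00000000",    # placeholder machine image
        "InstanceType": "t3.micro",
    },
)
print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])
```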

The story of how some major cloud platforms came to be is an interesting one. We'll take Amazon.com as an example--though Salesforce.com and Google's stories are almost identical. As Amazon experienced massive growth in their e-commerce business throughout the 1990s, they built massive data centers almost as fast as they could. Bringing a datacenter online is not a trivial task, and sometimes these companies carried excess computing capacity as they grew into their new infrastructure. What better way to monetize this infrastructure than to rent it out? Companies like Amazon developed incredibly complex and effective systems for allowing other companies to rent their computers.

Cloud platforms provide the infrastructure that enables an enormous number of websites to operate. Rather than hosting their websites on premises (an error-prone and risky proposition), sites such as Reddit, Yelp, and others pay Amazon to host their sites remotely. Reddit's core competency is not web hosting, so they do well to outsource that portion of their business to professionals like Amazon. Cloud platforms are a happy byproduct of the explosion in cloud services.

Cloud computing is a natural extension of the Internet. The metaphor of power generation is an apt one. Once, power was generated and used locally--think windmills used for irrigation and hydro powered textile mills. Eventually, electricity generation was centralized in a few large power plants and distributed via the power grid. This process entailed economies of scale that reduced the cost of electricity. Cloud computing is no different. Whether you bring AWS instances online like it's your job or you drop the occasional Twitter bomb, you're part of the computing revolution known as cloud computing. This may be true of some more than others, but we are all living and working in the cloud.

Tuesday, March 29, 2011

Robots go to School: What Does Machine Learning Mean?

Six hours and 54 minutes after embarking on a grueling march across the Mojave Desert, Stanley, the driverless Volkswagen Touareg, stormed across the finish line in the 2005 DARPA Grand Challenge. Charged with completing the difficult 132-mile desert obstacle course, the modified SUV rose to the challenge. DARPA (the Defense Advanced Research Projects Agency) had issued a challenge to engineers and computer scientists in 2004: have a driverless vehicle complete a long and difficult course consisting of GPS waypoints spaced roughly the length of a football field apart. The prize? $2 million. You may be wondering how a robot could be programmed to make the complex, instantaneous decisions that off-road driving requires. The short answer is machine learning (which, incidentally, sounds like the title of another Carpe Daemon series).

Stanley in his victory pose

Indeed, this is the first in a multi-part series about machine learning. You may have heard the term carelessly thrown around in many different contexts. That probably isn't a symptom of underinformed people trying to sound smart but rather a display of how ill-defined the topic is. In general, machine learning refers to algorithms that allow computers to make inferences based on past experience. A program starts out performing a task passably well, then it does the task some more, and then all of a sudden it can do the task better... well, that's machine learning. A more rigorous definition than that doesn't really exist.

You probably encounter tons of examples of machine learning every day. When you feed your handwritten check into the ATM and it (somewhat creepily) tells you how much it's worth... that's machine learning. When you go to the grocery store and receive a discount for using your "club card"... that discount was probably computed using machine learning techniques. If you have an Android phone and you use the voice search feature... you guessed it, that's machine learning. When that important email from your professor or boss is diverted into the spam folder, only to be discovered long past its expiration date... that's machine learning (or lack thereof). Given such diverse applications, it's hard to fathom how all of these things could fall under one umbrella.
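To make "inference from past experience" concrete, here is a minimal example in the spirit of the check-reading ATM: train a model on labeled images of handwritten digits, then have it label digits it has never seen. It uses the scikit-learn library.

```python
# Minimal example in the spirit of the check-reading ATM: learn handwritten
# digits from labeled examples, then label digits never seen during training.
# Requires scikit-learn.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale images of handwritten digits, with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)  # the "past experience"
print(f"accuracy on unseen digits: {model.score(X_test, y_test):.2f}")
```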

Machine learning is not one algorithm or even one branch of math or science. It's an aggregation of many different techniques borrowed from statistics, optimization, computer science, and even neurology. It is a set of tools borrowed from all of these fields and codified into algorithms that can be implemented on computers. By bringing the techniques of all these fields to bear on difficult problems, programmers can endow computers with the ability to perform tasks one might not expect them to complete.

In recent months, we have seen some powerful demonstrations of the capabilities of machine learning. First, Watson triumphed over the best human Jeopardy! players. This feat required an understanding of the nuances of the English language that many never expected computers to achieve. Fresh off Watson's victory, Google granted increased visibility into its semi-secret project to create autonomous cars by releasing this video. Where did Google get all of this autonomous car knowledge? If you guessed the DARPA Grand Challenge, then you guessed correctly. Sebastian Thrun, the leader of the victorious 2005 Stanford team, and his colleagues now work at Google.

These feats are somewhat jarring because they represent an encroachment of computers into domains previously dominated by humans. Let's just say that their inexorable march is not finished. Researchers will continue to improve machine learning algorithms, and computers will gain ever more aptitude in surprising domains. I love stoking robot apocalypse hysteria as much as the next person, but surprisingly that's not my point here. My point is that machine learning is an incredibly interesting and powerful tool that is applicable across an expanding number of domains. Something so powerful must be worth understanding... at least a little bit.

If Watson and Google's driverless car weren't enough to get you excited about upcoming installments, then I guarantee this will be. (Watch until the end. It's worth your two minutes).

Thursday, March 17, 2011

Broadening the Definition of Technology

When I say technology, you think of iPhones, hydrogen-powered cars, and the space shuttle. Valid, all of them. Gadgets, vehicles, and exotic materials are all obvious examples of technology. However, there is another whole class of knowledge that also qualifies--knowledge of processes.

Technology

In his 1999 New Yorker article titled "When Doctors Make Mistakes," Dr. Atul Gawande uses his experience as a surgeon to make a general claim about technology. Peppered with stories of medical malpractice, Gawande's article makes clear the fact that no matter how expert the doctor, room always remains for error. He contends that the main thrust of medical training should not be to protect us from bad doctors but rather from good ones. The sheer volume of cases that pass through the hands of competent, well-intentioned doctors each year almost guarantees that a "silly" mistake will have dire consequences.

Gawande's prescription for this malady is somewhat unorthodox. Rather than proposing increased training or harsher malpractice statutes, he suggests that more foolproof protocols be devised.

Every year, several surgeons accidentally operate on the wrong side of the body. Intending to repair a ligament in the left knee, they instead slice open the right. Obviously, this is rare, but it seems that it should be eliminated altogether. Clever doctors invented a foolproof method for ensuring that the surgeon operates in the correct place. The American Academy of Orthopaedic Surgeons now endorses the practice of surgeons initialing the body part on which they will operate before the patient comes to surgery.

The article contains myriad other examples of processes that reduce the already infrequent incidence of errors in the medical profession. Anything from standardizing the direction that anaesthesia knobs are turned to reduce the flow of gas to decision trees used by emergency room doctors can be considered technology. Following several successes in this arena, Gawande wrote a book titled The Checklist Manifesto. He argues that no matter how expert a professional is, checklists can improve his or her outcomes.

His book brings to mind perhaps the most famous checklist in common use--the pilot's preflight checklist. Before taking off, pilots must complete an extensive checklist intended to eliminate common oversights and confirm the plane's ability to fly safely. This checklist is one of the reasons that air travel is among the safest modes of transportation. It is also a prime example of a process that doubles as a piece of technology.

Another excellent example of an industry in which processes are part of the technology is, ironically, the software development industry. Authors, bloggers, and columnists have pontificated ad nauseam about the value of disciplined software development practices... and they are absolutely justified. Sloppy or lazy development can multiply the final amount of effort required to build a piece of software by a factor of ten or more--something I have experienced firsthand.

Frederick Brooks, Jr.'s book The Mythical Man-Month is perhaps the canonical resource on the topic. Written in 1975, it still ranks #7 in Amazon's entire software engineering category--a testament to the durability of its wisdom. Some aspects of the book are dated (e.g. weighting the size of a program's source code heavily because storage was so expensive), but the basic thesis is enduring. Brooks argues that adding extra manpower to a behind-schedule project that involves complex interdependencies will only cause the project to fall further behind. The overhead involved in bringing new members up to speed and maintaining communication among the enlarged team is insurmountable.
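The arithmetic behind that overhead is simple: with n people there are n(n-1)/2 pairwise communication channels, so coordination cost grows much faster than headcount.

```python
# Why adding people slows a late project: pairwise channels grow quadratically.
def communication_channels(team_size):
    return team_size * (team_size - 1) // 2

for size in (3, 5, 10, 20):
    print(f"{size:>2} people -> {communication_channels(size):>3} channels")
```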

Brooks's book is an example of a piece of knowledge that was invented by man, solves a problem, and makes life easier. If that isn't technology, then I don't know what is. This requires a broadening of our conception of technology to include all manner of managerial, medical, and professional processes. Gawande and Brooks don't just write about technology; their work should be considered technology in and of itself.

Wednesday, March 16, 2011

Nuclear Crisis in Japan: Implications for the Future of Nuclear Power

Ever since the 9.0 earthquake rocked Japan, non-stop coverage of the tragedy has inundated the media. Satellite images show barren wastelands where vibrant cities stood before, photographs of survivors show grieving Japanese citizens, and urgently worded articles speculate about the effect the catastrophe will have on global financial markets. However, one story seems to stand above the cacophony. Several reactors at the Fukushima I nuclear power plant currently sit precariously on the edge of meltdown, and the media seems unable to focus on anything else.

Fukushima I power plant

The vacuum of knowledge about nuclear power generation among the general public makes the situation at the Fukushima power plant perfect fodder for hysteria. Headlines flippantly compare the unfolding events to the Chernobyl disaster of 1986--widely considered the worst nuclear accident in history. However, further reading among more sober sources reveals that the current situation is not on the same scale and has little potential for progressing to that point.

You're probably wondering (as I was) just exactly what is happening at the Japanese power plant. Details have become more scarce over the past day or so, but this is what is known. When the earthquake originally struck, the electric pumps responsible for circulating cooling water through the reactors were knocked offline. The pumps are critical for the safe operation of the power plant because they keep the reactor cores, which continue to generate heat even after shutdown, from overheating. When these pumps fail--as they did in this case--the reactor is at risk of a "meltdown", a little-understood term with myriad sinister connotations.

A nuclear reactor meltdown occurs when the core temperature spikes from its norm around 550 degrees Fahrenheit to several thousand--hot enough to melt the fuel rods and destroy them. Under these extreme conditions, the fuel rods can melt to the bottom of the steel and concrete enclosure that houses them. It was once thought that the fuel could melt through the bottom of the enclosure "all the way to China"--a scenario referred to as "the China syndrome." Previous meltdowns (Chernobyl and Three Mile Island) have shown that the fuel rods are not hot enough to accomplish this. Thus, a meltdown is a dangerous event and a disaster for the owner of the reactor, but in general it is not a global health threat.

What makes the Japanese reactors safer than those at Chernobyl? First, reactor design outside of Russia is much different from that found at Chernobyl. For instance, the core of reactor 4 at the Chernobyl power plant--the reactor responsible for the toxic explosion--used graphite as a moderator. When an explosion exposed the hot graphite to the air, it ignited. The fire ejected massive amounts of radioactive material into the atmosphere, where it could be spread far and wide. Where the Chernobyl plant used graphite, the Fukushima plants use water--and water doesn't burn. A second major design difference is the containment vessel. The containment vessels at the Fukushima plant are formidable, whereas Chernobyl had no containment vessel. Current indications from Japan are that the containment vessels at Fukushima are still intact.


Wreckage of Chernobyl power plant (reactor 4 centered)

A small dose of perspective goes a long way toward ameliorating hysteria. Headlines blare alarms so vague as to be meaningless, e.g. "Japan: Nuclear plant emits radiation in the atmosphere." Breaking news: "So does your microwave." Not to make light of a serious situation, but such headlines, while good for garnering page views, are also good for inducing paranoia. We are exposed to radiation every day. The dangerous aspect of radiation is its intensity rather than its presence. The amount of radiation emitted by the Japanese plant so far is well short of a disaster and has almost no chance of reaching Chernobyl proportions.

The negative attention drawn to the nuclear power industry at large threatens Obama's agenda of battling America's addiction to oil with nuclear energy. An event such as the one we're witnessing in Japan certainly gives reason to reassess the health risks, but it should not halt a critical march toward energy independence. The safety concerns arising from the Japan incident, while relevant, are somewhat dampened by updates in reactor design. The reactors at the Fukushima plant are so-called generation II reactors. The generation III reactors that would be built in the US use passive cooling systems, eliminating the reliance on powered cooling water pumps. These design updates are just one of the reasons that Obama, as recently as today, still includes $36 billion in loan guarantees for the construction of new nuclear power plants in his budget.

The earthquake and its aftermath in Japan are truly tragic. The country is certainly in dire need of aid and rebuilding. I would even go so far as to say that the media attention helps ensure that the country receives the aid it so desperately needs. However, alarmist headlines focused on largely peripheral issues only serve to engender panic and perpetuate paranoia. Nuclear power is one of the few paths that leads us out of a politically and environmentally unsustainable dependence on oil. Sacrificing this progress in the name of attention-grabbing headlines is tragic.

Tuesday, March 15, 2011

Silicon Valley(ish) Founders Quiz

I've heard a lot of rumblings out there lately. "Oh, I know every Silicon Valley entrepreneur who's ever lived," says one (almost certainly) overconfident technocrat. Another egomaniac claims, "At home they call me Peter Thiel because I'm at the center of the Palo Alto rumor mill." Well, here's your shot to show what you're made of.

Jobs and Wozniak immortalized

Below I've listed 15 rock star tech entrepreneurs of the last decade or so. I've also listed the companies that they founded. Match the entrepreneur with their company and you score a point. Read on after the quiz to see what your score means.

1. Biz Stone
2. Larry Ellison
3. Bill Gates
4. Steve Wozniak
5. Mark Pincus
6. Larry Page
7. Jack Dorsey
8. Elon Musk
9. David Karp
10. Pierre Omidyar
11. Mark Zuckerberg
12. Steve Jobs
13. Sergey Brin
14. Eli Harari
15. Max Levchin


A. SanDisk
B. Twitter
C. Tumblr
D. Square
E. PayPal
F. Microsoft
G. Apple
H. Facebook
I. Oracle
J. Google
K. Zynga
L. eBay

Ok, now that you've written your responses down, I've got to kill some screen space before I give you the answers. Here are the scoring ranges and their respective merit levels.

0-3: You've had your head in the sand... or worse. I can't decide whether this is tragic, inexcusable, or both.
4-6: Not exactly up on your tech news, but at least you know that Bill Gates founded Microsoft. More Carpe Daemon definitely needs to be on your agenda.
7-10: You're getting there. Maybe you get your news from the technology section of the Wall Street Journal or the New York Times. If you get brave, try to widen your purview to some well regarded blogs like TechCrunch and Engadget.
11-14: You're up to date on Silicon Valley happenings. Perhaps you're an aspiring founder trying to learn from the pros?
15-16: Yes, 16. Attuned readers noticed that Jack Dorsey indeed founded not one, but two companies on the list. I'm impressed... and honored to have you among the Carpe Daemon readership.

Whether you scored a 2 or a 16, the only way to get better (assuming you even care about these things I spend my life obsessing over) is to practice. I already mentioned a couple blogs that will keep you up to speed. You could also follow some of the founders I mentioned on Twitter. Whatever your course of action, I hope you enjoyed the quiz.


Answers:
1. B
2. I
3. F
4. G
5. K
6. J
7. B, D
8. E
9. C
10. L
11. H
12. G
13. J
14. A
15. E

Thursday, March 10, 2011

The Art of Computer Programming: Programming Languages

Today I'm bringing you the second installment in an ongoing series intended to demystify the practice of computer programming. Last time, I wrote about the utility of a good text editor to a programmer. Though relatively useless to the average person, a good text editor is indispensable for a developer. Why do programmers need such a good text editor? Most of a developer's day is consumed by writing source code in a particular programming language, and the ability to do so painlessly and efficiently can have a huge impact on productivity. This brings me to today's topic--programming languages.

Java is one of the most commonly used languages today

"Programming languages" share half their name with "natural languages." Indeed, the two are fundamentally in the same business, that is, communicating information between two parties in a mutually understandable form. In the case of programming languages, the two parties are not humans, but rather a human and a computer. When one needs to communicate with a friend from overseas, one learns a foreign language. Similarly, when a programmer wishes to convey information to a computer, he or she learns a programming language. (A minor difference exists in that programmers usually regale computers with mind-numbingly repetitive instructions rather than study-abroad stories.)

You might be wondering why we need programming languages in the first place. Why can't we just describe what we want the computer to do in English? Natural languages are imprecise, which is a problem in critical situations. The same sentence can have multiple meanings depending on interpretation, not to mention inflection, body language, and other context. As we saw on Jeopardy a few weeks ago, the state of the art in natural language processing is not nearly capable of handling nuanced requests. Watson confusing Toronto for a US city was relatively harmless. What if Watson confused your bank account for someone else's, or thought you wanted to send email to Margaret your paramour rather than Margaret your boss?
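To make the contrast concrete, here's a minimal sketch (written in TypeScript, with names and email addresses I've invented purely for illustration) of just how explicit a program has to be about something English happily leaves to context:

// In English, "send the email to Margaret" is ambiguous.
// In a programming language, every detail must be spelled out.
function sendEmail(toAddress: string, subject: string, body: string): void {
  // A real program would talk to a mail server here; this one just prints.
  console.log(`Sending to ${toAddress}: ${subject}`);
}

// The computer cannot guess which Margaret you mean -- you have to say so.
sendEmail(
  "margaret.the.boss@example.com",   // not margaret.the.paramour@example.com!
  "Quarterly report",
  "Please find the report attached."
);

Verbose? Absolutely. But nothing is left to interpretation, and that precision is exactly what the computer needs.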

We've established the need for programming languages, so the next logical step is putting our heads together and coming up with one perfect method of communicating with machines--if only. Wikipedia lists somewhere between several hundred and a thousand programming languages. Granted, most of these languages have long been extinct, but a dizzying array is still in common use. The reason for this technological Tower of Babel is the diversity of applications for programming languages. For example, PHP, Python, and Ruby are considered "internet" languages because they include many features that streamline web development. C and C++ are the languages of choice when a programmer needs to optimize a program for speed. Java is a mainstay in the corporate and mobile spheres thanks to its security and its ability to run the same program on a wide range of different computers. Still other languages, such as ML, are considered academic languages and are used more for study than application. (Yes, the high-frequency trading firm Jane Street uses a variant of ML, but no, that does not qualify as wide non-academic use.)

Learning a programming language can be a fun and intellectually stimulating experience. The most important part is resisting the urge to throw up your hands in the first five minutes and say, "This is impossible." If you had had the option, you probably would have done the same thing when you were learning English. Plenty of languages offer a gentle introduction to programming. For instance, JavaScript runs in the browser you already have, so you can try it without downloading any new software. If you get brave enough, Google a tutorial and give it a shot.
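If you're curious what that first step looks like, here's a tiny sketch of a first program. I've written it in TypeScript, which is essentially JavaScript with optional type labels; these particular lines don't use the labels, so you can paste them straight into your browser's JavaScript console:

// A first program: print a greeting, then count to three.
const greeting = "Hello, world!";
console.log(greeting);

for (let count = 1; count <= 3; count++) {
  console.log("This is line number " + count);
}

That's it--two ideas (storing a value and repeating an action) that show up in virtually every language mentioned above.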

Tuesday, March 8, 2011

The Next Big Steps for Egypt

I’m excited to bring you Carpe Daemon’s third guest blogger. Michael Maag is a 2009 graduate of Princeton University. He currently lives in New York and works for an Internet advertising analytics company. He’s a four-minute miler--which doesn’t qualify him to write about economics. However, his economics degree does. Read on to see what he thinks the next big step is for Egypt.

- 44 Maagnum


There are two unemployment rates in Egypt, depending on your level of education: 5% and 17%. If you are an Egyptian who has decided to invest the time and money necessary to obtain a college education, you will probably be disappointed to find you’ve earned yourself an unemployment rate more than 3x that of your illiterate countrymen. Yes, you read that correctly. College-educated Egyptians are 3x as likely as illiterate Egyptians to be unemployed. This is a problem.


So why is it that Egypt’s most productive workers sit idle while day laborers, chambermaids, and dish washers find work like it’s America in the ’90s? It turns out the answer probably has something to do with Egypt’s recent economic success and the government’s reinvestment of that success in education.

In last week’s New Yorker article (cleverly) entitled “Prophet Motive”, John Cassidy examines the interplay - apparent tension, even - between Islam and economic prosperity. Nations with predominantly Muslim populations by and large remain stuck on the bottom rung of the economic ladder, but Cassidy is careful to acknowledge the exceptions to the rule - namely Turkey and Egypt. The governments in Turkey and Egypt (97% and 90% Muslim, respectively) have both managed to shepherd their countries to robust economic growth in recent years. The economic expansion itself is good, but the recency and alacrity of that expansion have combined to produce some serious growing pains. In Egypt’s case, one of the pain points is a mismatch between the supply of educated labor coming from the universities and the demand for that labor coming from the corporate sector.

To Egypt’s credit, the government has been taking a good portion of its recent economic winnings and investing them in education, producing an ever larger cohort of college-educated 20-somethings. With the preponderance of evidence indicating that educated labor will continue to win higher and higher wages relative to uneducated labor (see: income inequality in America), the Egyptian government’s investment is likely to pay big dividends in the long run. The only problem is that, right now, kids are graduating with no place to work. Egypt simply hasn’t had the time to build a large enough corporate sector to absorb all the educated labor the country’s education system is churning out. So today, Egypt has a large, educated, and very frustrated younger set that’s clamoring for the Egyptian government to lend a hand.

How the Egyptian government decides to do so is a huge question mark. Given the populist conflagration presently raging, the impulse to saddle the government with the entire burden of recovery is surely strong. Of all the institutions in Egypt, the government is probably the one poised to act most swiftly, but it would likely prove unable to sustain high employment for educated workers deep into the future (see: Greece). Thankfully, Egypt has other options that will stimulate demand for educated labor in the near term and beyond.

This is where technology comes in (and, conveniently, where this post becomes contextually relevant). Egypt needs to encourage - through the usual means like low taxes, free trade, simple laws surrounding incorporation, forgiving bankruptcy law, etc. - an influx of foreign investment. If Egypt can’t build its own corporate sector overnight, the next best thing is to have a developed country install an outpost of its own. And remember, we’re not talking about Nike building a shoe factory that will employ the rural poor; we’re talking about college graduates working lab and desk jobs. That way, the educated and unemployed gain employment doing fulfilling work at a skill level the native industry has not had the time to reach... yet. The immediate effects of foreign investment are good, but the real beauty of this strategy is manifest in the inevitable scenario where the employees of those foreign firms, after learning the tricks of the trade, grow weary of working for IBM Egypt (or Caterpillar Egypt, Toyota Egypt, SIEMENS Egypt... you get the idea) and decide to take a crack at doing their own thing. On this road Egypt gets to have its cake and eat it too: a high-quality boost to employment today and an enduring tack toward a native industrial complement to its strong educational system.

Ultimately, the employment problem in Egypt is a good sign - it wouldn’t be happening unless Egypt was progressing. Now that the Egyptian people have their scalp, it’s time to get to work building the foundations of an economy designed to employ the skilled labor Egypt already has in plentiful supply.

Monday, March 7, 2011

Monday Mailbag: iPad Puts on a Suit and Goes to Work

I'm excited to introduce you to a new feature that will be running on Carpe Daemon. It's the Monday mailbag! From time to time, I'm asked questions about technology that I think would make great blog posts. The Monday mailbag will just formalize the process and open the floor to everyone. I think I know the burning question that's on everyone's mind right now--where do I send my fan mail... uh... I mean questions?!?! And the answer to that excellent question is 44maagnum@gmail.com.


The other day I was asked how technology is changing the medical field. This is an area where I readily admit that I'm no expert (one of the few). However, I do have a few thoughts. This TechCrunch article details an iPad app called DrChrono that doctors can use during patient appointments to take notes and write prescriptions. DrChrono represents a huge leap forward for a few reasons. Most importantly, it allows wizened doctors to look hip wielding a chic new Apple gadget, but it also lets them keep electronic medical records without the unnecessary and cumbersome intermediate step of writing them down on paper.

Electronic medical records have been one of Obama's priorities since he took office. He claims they will eliminate billions of dollars in health care costs--more than relevant coming from a president whose healthcare plan is decried as unsustainably expensive. In addition to saving money, electronic medical records can help prevent doctor error. For example, when a doctor writes a prescription, software can check a patient's history for problems regarding that specific medication or similar medications. Software programs may have a poor bedside manner, but they are adept at sifting through large amounts of patient history and medication information looking for potential problems.
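To show what that kind of cross-check boils down to, here's a purely hypothetical sketch--my own invention, not how DrChrono or any real prescription system works, and the drug names are made up--that compares a new prescription against a table of known bad combinations:

// Hypothetical sketch in TypeScript: flag risky drug combinations.
// The drug names and interaction table below are invented for illustration.
const knownInteractions: Record<string, string[]> = {
  drugA: ["drugB", "drugC"],
  drugB: ["drugA"],
};

function checkPrescription(newDrug: string, patientHistory: string[]): string[] {
  const conflicts = knownInteractions[newDrug] ?? [];
  // Return every drug in the patient's history that conflicts with the new one.
  return patientHistory.filter((existing) => conflicts.includes(existing));
}

// A doctor about to prescribe drugA for a patient already taking drugB:
const warnings = checkPrescription("drugA", ["drugB", "drugD"]);
if (warnings.length > 0) {
  console.log("Warning: possible interaction with " + warnings.join(", "));
}

A real system layers on dosage rules, allergy records, and far larger databases, but the core idea--software tirelessly cross-checking a list no doctor has time to memorize--is the same.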

The examination room isn't the only professional workplace where the upstart iPad is making noise. While we're talking about taking our iPads to work, let's talk about the FAA's recent approval of the iPad as a replacement for paper flight maps. A particular charter jet operator recently gained FAA approval to stop using paper maps and rely solely on the iPad. The company went through a strenuous approval process, and its success apparently does not extend automatically to other operators. However, it certainly sets a strong precedent, and I'm sure that many other companies and airlines will follow in its footsteps. Using the iPad instead of paper maps just makes too much sense.

Though these are just two examples of the iPad moving into the workplace, they are representative of a larger trend of using the iPad for activities other than entertainment. Apple bills the iPad as a device for consuming media and online content because the consumer market is its focus. However, by letting third-party developers build applications for professionals, Apple allows the iPad to be used in novel settings. Apple probably never imagined pilots relying on the iPad in the cockpit, but it did have the foresight to open the door to developers who could make it a reality.

It's time to give all those nagging technology questions a voice. Why not just email them to 44maagnum@gmail.com right now?