Monday, June 20, 2011

Law & Technology on the Bleeding Edge: The Future of Google's Automated Cars

Another great guest post from TommyLoff. This time he's flexing his legal muscle.


- 44 Maagnum




Thanks to the fine people at Carpe Daemon (and perhaps a few other small news outlets) our readership (potentially 7 billion strong!) is fully aware of just how awesome Google's automated car project is. Very. In the past seven years, autonomous vehicles have gone from completing roughly 7 of 150 miles in the 2004 DARPA Grand Challenge to an incident-free 140,000 miles on California highways by a set of Google-enhanced Priuses. News of this relatively secret project broke in the fall of 2010, and Google has since busied itself with a few nifty demos and with lobbying to legalize the private and commercial use of its technology in the state of Nevada. The question "Why Nevada?" remains unanswered, but a world in which self-driving cars burn rubber down the Las Vegas strip is one I very much want to live in.



Google's lobbying has culminated in the presentation of two bills to the Nevada state legislature. The first would allow for the licensing and legal operation of autonomous vehicles on Nevada roads, and the second would exempt the operators of said vehicles from the statewide prohibition on texting while driving. These bills are likely to come to a vote in the next week, before Nevada's current legislative session comes to an end. Passage of these bills is certainly an exciting prospect for tech enthusiasts and futurists alike; however, the danger lies in whether our legal system can cope with such rapid technological development.

Such legislation is based on Google's assertion that self-driving cars will not only be safer but will also dramatically reduce infrastructure costs in our transit system by decreasing traffic delays and increasing fuel efficiency. These promises amount to aggregate annual savings well into the billions, along with the potential to save tens of thousands of American lives each year. Gains, however, are not without costs, and the reality of yielding personal autonomy to lines of code will likely require a dramatic shift in our existing legal paradigms - particularly in how we as a society think about liability.

Thankfully such a change will not happen with the flip of a switch. There will be a significant period in which humans share the road with a small minority of autonomous vehicles, and only slowly, over many years, as the number of autonomous vehicles increases will the societal-level benefits become evident. Let's begin with this transition period - one we may officially be entering within days, depending on the decision of the Nevada legislature.

At the outset, liability will continue to fall on the vehicle operator, exceptional circumstances excluded. If, for instance, the automated software somehow prevented the driver from effectively controlling the vehicle and thereby caused a collision, then product liability might come into play. For the most part, however, the existing doctrines of contributory and comparative negligence, which establish the criteria necessary to seek compensation in a civil suit, will continue to be used to assign liability to vehicle operators. (For the legal eagles among the readership: contributory negligence prevents compensation in a civil case if the plaintiff is deemed to be even 1% liable, while comparative negligence attempts to establish what percentage each party contributed to an accident and bases settlements on those figures.) While liability will continue to fall on the operator for the foreseeable future, legislation will steadily narrow the scope of that liability, and laws governing the use of automated technology will develop piecemeal on a state-by-state basis to reduce the culpability of individuals as they transition from vehicle operators to mere occupants. With the Nevada bill already pushing to exempt operators of automated vehicles from the state prohibition on texting, it is clear that the envisioned end game is a system composed predominantly of automated vehicles in which occupants are divorced entirely from the vehicle's operation.

This is where things fall apart. When the vehicle occupant loses culpability for the actions of the vehicle, a new paradigm must emerge, particularly if we are operating under the assumption that automated cars will still collide from time to time. Regardless of how dramatic a safety improvement we may see, the idea that automation can bring our annual highway death toll to zero is extremely far-fetched. Currently the highway death toll in the United States is roughly 33,000 annually. As a thought experiment, let's imagine that a future system with a negligible number of human drivers reduces this figure to 8,000 deaths annually. Outwardly this is a very positive figure - yet one which lands us in a pickle. The only potentially liable party for motor vehicle accidents is now the product responsible for improving aggregate safety by roughly 75%.

Unfortunately we are a society which thrives on the assignment of blame. If the automated systems from Google and other manufacturers are reducing American highway fatalities from 33,000 a year to 8,000 a year, is it truly reasonable for them to bear the burden of 8,000 wrongful death claims despite preventing an additional 25,000 deaths? From both an ethical and a practical standpoint, it would appear that one must side with Google. Such dramatic safety improvements should be rewarded, not punished, and as a practical matter, if risk is no longer dispersed across the hundreds of millions of motor vehicle operators in the U.S. but instead consolidated onto a handful of corporations, the necessary insurance premiums for those corporations could threaten the existence of the entire industry. Yet the idea that wrongful deaths would be written off as a necessary evil of our new and improved transit system is preposterous. As troubling as financial compensation for loss of life or well-being is, even more troubling is no compensation at all, and I for one am somewhat at a loss for a solution. Luckily our courts and legislatures will have at least some time and intermediary steps before reaching a new equilibrium.

Obviously there is a long road ahead for the implementation of automated vehicles, and there are tremendous obstacles along the way. For one, as demonstrated by Will Smith in I, Robot, people really don't like the idea of robots doing people things, especially when it involves giving up personal autonomy. We are irrational beings, and just as the vast majority of us assess ourselves as above-average drivers, so too will many cling to the belief that in a dangerous situation they would rather be behind the wheel than leave it to a machine. Thus it is unclear whether the 'cool factor' alone will be enough to drive initial adoption. That said, in recent years it has not been wise to bet against technological advancement, nor against Google. The demos, the seriousness of the effort to bring the product to market in Nevada, and the sheer size of the market Google could potentially capture all speak to the legitimacy of the operation. That there is dramatic uncertainty regarding the legal ramifications of such technology should be no barrier to progress, and concerns that we lack a framework to equitably resolve every potential outcome ought not be used as grounds to limit it. This is particularly true here because the exact changes necessary to our legal framework won't become evident until some level of implementation is achieved. Stepping beyond the boundaries of the autonomous car, the point at which technology and law meet on the bleeding edge must be approached with flexibility in mind. As we move forward, it is the adaptability, not the predictive value, of the law with which our legislatures and courts should concern themselves.

Tuesday, June 14, 2011

Rails vs Force.com: Web App Smackdown

I've had my head buried pretty deeply in both the Ruby on Rails framework and the Force.com platform lately. I'm going to take a breather and try to gain some perspective on how the two fit together. At first glance, most would dismiss the two as completely different beasts. I know I did. Upon closer inspection, I've found them to be much more similar than I first expected.



First things first: what's a framework? A framework is a piece of software that eliminates the busywork of creating an application. Ruby on Rails is a framework for creating web apps. By leveraging a framework like Rails, a developer can build and deploy applications much faster than they otherwise could. A framework like Rails contains hundreds of thousands of lines of code (my rough estimate)--all code that the developer doesn't have to rewrite when making their own application.

There is a subtle distinction between a framework and a platform. Both exist in pursuit of the same goal, but each accomplishes it in a different manner. A platform, in the case of Force.com, is a whole package deal. This means that the creators of Force.com provide everything from the tools to develop applications to the hosting of the applications themselves. Force.com applications exist solely on the Force.com infrastructure. In contrast, a Rails app could be hosted on Heroku (owned by the same company as Force.com), Amazon Web Services, or even a personal laptop. These differences mean that each is optimized for a certain application, but first I'll talk about the similarities.



The glaring similarity between the two is the model-view-controller (MVC) architecture. Roughly, the model defines the structure of the data, the view describes how the data is shown to the user, and the controller handles all of the logic that connects the two. Although MVC is a powerful paradigm, not all frameworks implement it... not by a long shot. That Rails and Force.com both follow the MVC architecture is a huge similarity that cannot be ignored.
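To make that concrete, here's a minimal sketch of how the three pieces look on the Rails side (the Post model and PostsController below are hypothetical names, invented purely for illustration):

    # Model: defines the structure and rules of the data (app/models/post.rb)
    class Post < ActiveRecord::Base
      validates :title, presence: true
    end

    # Controller: the logic connecting model and view (app/controllers/posts_controller.rb)
    class PostsController < ApplicationController
      def index
        @posts = Post.all  # fetch the data from the model...
      end                  # ...then Rails renders the matching view,
    end                    # app/views/posts/index.html.erb, to display it

Force.com divides the responsibilities along the same lines, with custom objects, Apex controllers, and Visualforce pages playing the three roles.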

Both Force.com and Rails make getting up and running with a simple database-backed application almost trivial. The Rails scaffold generator literally requires one line of code to create an entire application (two if you count the initialization of the app). Not to be outdone, Force.com allows semi-technical users to do the same thing with point-and-click configuration--requiring just slightly more time. Data-backed applications are a pervasive reality, and both of these tools have all but solved the problem of creating them.
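For the curious, those lines look roughly like this with Rails 3-era commands (the app name "blog" and the Post fields are made-up examples, not anything from a real project):

    rails new blog                                        # the initialization line
    rails generate scaffold Post title:string body:text   # the one line that builds the app

Run rake db:migrate afterwards to create the table, and you have working create, read, update, and delete pages for Post, courtesy of the scaffold.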

I know exactly the state that I've put you in, and I apologize. Right now you're wringing your hands and wiping sweat off your forehead because you're trying to decide whether you're going to use Rails or Force.com's platform. I've gone to some effort to highlight the similarities, but the fact remains that they are very different tools and should each be used in different circumstances.

Rails gives the developer a much finer level of control over their application--for starters, you can host it with someone other than its creator. Rails is by far the more flexible of the two. In fact, you could change any part of Rails itself, because it is an open source project, i.e. the source code is freely available to read and modify. This makes Rails ideal for creating web apps unlike anything that's come before. Of course, this flexibility comes at a price.

Force.com excels in some areas that Rails isn't meant to optimize--especially in business applications. Data-backed business applications follow a fairly uniform pattern: the objects in question and the relationships between them are easy to describe. Force.com has done an incredible job of standardizing the process of business application creation. I think of Force.com as a specialized set of LEGOs. It certainly isn't the best solution for every problem, but if it's the right set of LEGOs for the job, you aren't going to find an easier or better tool. Thankfully for the creators of Force.com, there is no shortage of businesses that need software for which the platform is well suited.

I've developed with Rails for some time now, and I've grown to love the framework. Ruby is one of my favorite languages, gems couldn't be easier, and it's one of the New and Interesting frameworks that everyone is talking about. When I first started developing for Force.com, I had a negative visceral reaction to the circa-2001 styling and copious links. How could anyone prefer this clunky, plodding platform to the flashy, agile Rails? While I haven't yet started worshiping at the Force.com altar (and I have no plans to), it's grown on me to the point that I no longer have an allergic reaction the minute I log on.

Tuesday, June 7, 2011

Twitter Gets Tenure

I'm proud to bring you a guest post by one of the greatest historical minds of our time--TommyLoff. Versed in such diverse topics as the IRA and the latest internet memes, TommyLoff is a welcome addition to the Carpe Daemon roster.


- 44 Maagnum





Every tweet since Twitter's 2006 debut has made its way into the Library of Congress. That's every contribution to the twittersphere that you, I, Charlie Sheen, and the BronxZooCobra have ever made, in addition to whatever all those other people have happened to tweet over the years. Yes, this includes the initially alleged, now confirmed twitpics of Democratic Representative Anthony Weiner's wiener. To appreciate just how staggering this volume of data is, a quick glance at the stats Twitter released for March 2011 offers some more or less unfathomable numbers to ponder. While it took over three years to reach the first billion tweets, roughly 140 million tweets are now let loose every day. What once took three years now takes a week. Even more staggering is the fact that over the one-month span of March 2011, nearly half a million new Twitter accounts were created daily. Shocking. As of April 2010, Twitter had transferred over 100 terabytes of material to the Library of Congress - a figure which has likely more than doubled by now. Presumably, the fact that the Library of Congress is going to the trouble of archiving this material implies that someone somewhere views this data as an invaluable resource for academics of the Internet age, present and future. Let's take a closer look at what future hordes of grad students might glean from our tweets, shall we?
The real-time impact of Twitter, Facebook, and social media as a whole has been demonstrated repeatedly. Tweets helped fuel global indignation over the 2009 Iranian election, and over the past six months they have provided both a rallying point for rebels on the ground and a window for the outside world into the events unfolding across the Middle East. Such considerations certainly played heavily into the United Nations' decision to declare Internet access a human right and the attempted denial of Internet access a breach of unalienable personal liberties. Historians studying these dramatic global events will certainly turn to Twitter in their analyses, but what of social historians gleaning tidbits on the life and times of the burgeoning Internet age?
Without a doubt, Twitter places the historian inside the off-the-cuff, uncensored mind of the individual. In terms of real-time, immediate reactions to events, Twitter is unparalleled. The greatest battle the future historian will face is the sheer enormity of the data set, yet it is a data set born in the digital age. Hundreds of billions of tweets are in fact far more manageable than the present reality of historical materials scattered on microfilm and loose-leaf across university and national archives around the globe. A few keystrokes into a search engine and our future historian is well on his way. With this in mind, if not for our children's historians then for our children's children's historians, for goodness sakes keep hashtagging! #psa

That Twitter is free, public, and searchable means that, legally speaking, graduate students have every right to peruse our various online indiscretions, yet it seems unlikely that Twitter's academic value lies in the musings of the individual. As much as I would like to believe my tweets provide invaluable insight into the world around us, Twitter is nevertheless a petty and shallow medium. 140 characters simply cannot carry much depth, and that brevity is certainly a critical element of Twitter's meteoric rise to prominence. If individual tweets lack the depth necessary to serve as effective source material, our future historian must turn elsewhere for value: to the remarkable way Twitter reflects global cultural significance in aggregate, a trait which will only grow as Twitter's demographics continue to broaden.

Without further ado, cue case study:

Bro-Patriotism and the Death of Osama bin Laden

Easily my favorite Twitter metric is tweets per second. As a ready-made measure of global cultural impact, nothing like it has ever existed. For the first time there is a direct comparison between the cultural relevance of Lady Gaga's meat costume and, let's say, President Obama's goals for improved Israeli-Palestinian relations. Needless to say, Gaga wins out in the end. And ultimately Gaga will always win out unless going head to head with Osama bin Laden: cultural champion of the Twitter age. Of course, more recent events are weighted more heavily in a purely numerical metric like tweets per second, given the ever-increasing number of Twitter users; even so, it is safe to say that the death of Osama bin Laden struck a serious chord within our collective psyche.
The metric itself is remarkably precise. As the handy chart demonstrates, significant announcements over the course of the evening of May 1, 2011 correlate perfectly with peaks in the data. (The hourly peaks are a by-product of auto-tweeting updates.) Incidentally, the evening of May 1, 2011 represents the highest sustained period of tweets ever. In the one-hour span from 11pm to 12am, tweets easily surpassed 4,000 per second - roughly 15 million tweets in a single hour, composed largely of uninspired "USA! USA! #binladen" derivatives. This is certainly not the place to find dramatic insight into the importance and consequence of the event at hand, yet what can be taken away?


Did the death of Osama bin Laden reveal the inner bro-patriot in every American? A glimpse, perhaps, into a shared American common ground apart from our rather dramatic political differences? Or do the tweets reveal nothing more than the cathartic release of shared loss through a relatively anonymous medium where it's alright to be a bit douchey? Arguments can and will be made on all of these points, yet hopefully they will be recognized as objectively incomplete without drawing on broader source material. For the time being, the most effective use of Twitter with regard to the historical record is its ability to draw attention to the fact that, "Holy shit, people really cared about Osama, Bieber, Obama, *generic celebrity gaffe*, etc."

While the digital record will revolutionize the study of history, it will not do so on the basis of Twitter alone. We are each leaving an unprecedented written record scattered across the digital ether. Though this legacy is fraught with privacy and accessibility issues, it must not be discarded in favor of the low-hanging fruit. Twitter is of course not merely a historical curiosity in its own right but a powerful tool, and its greatest attribute is accessibility. In reality, though, it is a mere drop in the ocean relative to the material locked away in servers and hard drives across the globe. The trick will be to access those materials amidst the changing dynamics of privacy in the Internet age. For now, however, Twitter is free, public, and searchable, and by all means the most must be made of this burgeoning resource, not only in the study of history but throughout the social sciences. #neverstoplearning

Saturday, June 4, 2011

Living and Working in the Cloud

Almost every day I hear the term "cloud computing" used, and about half the time it's used incorrectly. Massive amounts of confusion seem to surround the concept. I often hear refrains along the lines of "Cloud computing is the future." To me, this statement is so obvious as to be meaningless. Saying cloud computing is the future is a little bit like saying "Mint toothpaste is the future." Mint toothpaste is so obviously useful that it will certainly be used for years to come. Of course mint toothpaste is the future. However, mint toothpaste is also the present and the past. Cloud computing is here, has been here, and is here to stay.



Put simply, cloud computing is the practice of using computers located off site to accomplish a given computing task. Rarely do you use the Internet without taking advantage of cloud computing. Sure, the term is somewhat nebulous, and gradations of appropriateness exist. Reading your webmail isn't as clearly cloud computing as storing a Google doc in their datacenters. However, both could be safely called cloud computing.

A little history will go a long way. Before the Internet was invented in 1969, it's safe to say that cloud computing did not exist. Yes, computers were connected into networks, but these networks were almost exclusively contained within one institution--whether a company, a university, or another entity. These networks did not have great geographical span, so cloud computing wasn't really possible. Shortly after the Internet came into being, cloud computing still didn't exist. Computers communicated--sometimes over large distances--but this communication consisted mainly of fetching static web pages. Eventually people realized that heavy-duty computing tasks could be offloaded to more capable remote computers. Enter Salesforce.com.

Traditionally, computer software is written on one computer, copied to some storage medium (a CD), and sold to be run on a different computer. We'll call this "shrink-wrapped software"--though cellophane isn't a requirement. Salesforce.com shattered this mold when they introduced the revolutionary software as a service (SaaS) paradigm in 1999. The term is roughly synonymous with cloud computing. Rather than charging a flat fee for ownership of a piece of software, software as a service typically charges a monthly fee for the use of software delivered over the Internet. Salesforce.com pioneered this model with their best-in-class customer relationship management software. The SaaS landscape has simply exploded since 1999.

The cloud computing domain is broken up into two major camps--cloud platforms and cloud services. You will be very familiar with the cloud services--Salesforce.com, Twitter, Facebook, Gmail, Dropbox, etc. These are the technologies that have so thoroughly and seamlessly infiltrated daily life that we take them for granted. You may not have heard of even the most popular cloud platforms--Force.com, Amazon Web Services (AWS), Heroku (now owned by Salesforce.com), Google App Engine, Rackspace, and others. Cloud platforms provide computing time for rent. AWS literally rents processors by the hour, and goes so far as to maintain an on-demand spot market for computing time.

The story of how some major cloud platforms came to be is an interesting one. We'll take Amazon.com as an example--though Salesforce.com and Google's stories are almost identical. As Amazon experienced massive growth in their e-commerce business throughout the 1990s, they built massive data centers almost as fast as they could. Bringing a datacenter online is not a trivial task, and sometimes these companies carried excess computing capacity as they grew into their new infrastructure. What better way to monetize this infrastructure than to rent it out? Companies like Amazon developed incredibly complex and effective systems for allowing other companies to rent their computers.

The cloud platforms provide the infrastructure that enables a vast number of websites to operate. Rather than hosting their websites on premises (an error-prone and risky proposition), sites such as Reddit, Yelp, and others pay Amazon to host them remotely. Reddit's core competency is not web hosting, so they do well to outsource that portion of their business to professionals like Amazon. Cloud platforms are a happy byproduct of the explosion in cloud services.

Cloud computing is a natural extension of the Internet. The metaphor of power generation is an apt one. Once, power was generated and used locally--think windmills used for irrigation and hydro powered textile mills. Eventually, electricity generation was centralized in a few large power plants and distributed via the power grid. This process entailed economies of scale that reduced the cost of electricity. Cloud computing is no different. Whether you bring AWS instances online like it's your job or you drop the occasional Twitter bomb, you're part of the computing revolution known as cloud computing. This may be true of some more than others, but we are all living and working in the cloud.