Games with in-app purchases offer valuable life lessons for kids and developers


tl;dr: Mobile games with “bags of gold” in-app purchases teach kids and developers that:

  • if you have deep pockets, you don’t need to be smart or work hard to win
  • without money, skill and hard work won’t get you anywhere, and there are no second chances.
  • building honest games is its own reward; don’t expect to make any money on top of it.

Who could argue with that?

As a developer, I enjoy letting my children try out new iPad games. I tell them to avoid the ones that sell “chests of coins” or “bags of gems”, because you need to pay to keep playing and that’s not how honest apps should work.

But recently, after someone told me about a certain game that is raking in a million dollars a day and has the tech press drooling all over it, I decided to perform an experiment. I installed the game for my eldest daughter and asked her to see how far she could get before having to buy anything. Just two hours later, her ragged, penniless and underarmed villagers were facing certain death in a desperate attempt to take over their neighbor.

Maybe it was a matter of skill. After all, my daughter had just discovered the game. Maybe the next attempt would be more successful? No such luck: we could not find the “restart” button. Even deleting and reinstalling the app did not work! The developers clearly took great care to make sure one could not enjoy the free game too much.

Renting digital media instead of buying it is nothing new. We don’t purchase ebooks, TV shows, music or movies. We just rent them. But at least we know in advance how much it is going to cost us. With a pay-to-play mobile game, we know that the more we enjoy it, the more expensive it gets. I don’t know if it is even possible to “win” at those games, and I am not going to find out.

To be clear, I have nothing against in-app purchases that unlock specialized features, subscriptions, or are substitutes for free trials or even paid upgrades. But $99 for a chest of gems, seriously?

So Apple, you have repeatedly shown that protecting your customers is a high priority. Personally, I find a scam much more offensive than some nudity or political speech, and I don’t think I am alone. If you must censor the latter, can you please do something about those pay-to-play games? After all, the base version of an app is supposed to be fully functional, and the guidelines for In-App Purchases prohibit “intermediary currencies”. The recently added “Offers In-App Purchases” warning is not enough, since all the App Store lists are infested with them.

I know that many developers (mostly indie) despise that practice. Yet, among the hundreds of “app discovery” web sites, I could not find a single one devoted to regular purchase-once-and-play-forever games. Maybe it is time for an “honest games alliance” or something?


Facebook and Twitter are bad for the economy.

I don’t mean the lost productivity of workers who update their status on their employer’s dime. I mean that the new small and large companies that our economies badly need just won’t be built on top of Twitter or Facebook.

Many people have made the same points before, much better than I could, but Anil Dash’s excellent piece encouraged me to post this anyway.

Facebook and Twitter provide tremendous value to their users. In just a few years those two companies have become a large part of the Internet. Facebook has redefined how people share information with family, friends and acquaintances, and pushed everyone to share a lot more. For many, Facebook is the Internet. At the same time, Twitter has managed to become the platform of choice for real-time news, personal broadcasting, aggregation, curation and more. I remember reading sci-fi novels that described the rise of a global consciousness, or hive mind, depending on whether you see it as a positive or a negative. Call me naive, but I believe Twitter is the first credible step in that direction. So congratulations, guys! But sorry, this stuff is way too important to be left to just two companies.

Twitter’s new API rules make it abundantly clear that if you are an entrepreneur looking to build a sustainable business on top of an API you don’t control, you are severely deluded. But it is not just about Twitter clients, or even about Twitter. Facebook and Twitter, to name just those two, are struggling to find business models and revenue that justify their valuations. Even when they do find the revenue, if startup X comes along and manages to be profitable doing Twitter search or Facebook analytics (for example), how long will it take before it is squeezed out? An API that used to be free will suddenly have to be paid for, or it will be so restricted as to become useless, while the provider of the API replicates the money-making functionality. If they are lucky and if they play nice, startup X may be rewarded with an acquisition. Niche businesses and some companies built for a quick flip will do well, but large independent ones? I don’t think so.

Now, before someone points out that Twitter or Facebook can do whatever the hell they want with their API, since they built it with their investors’ money and they provide a free service to users, let me say I wholeheartedly agree. No one except their shareholders has any right to tell them how to run their business. It’s not that they are evil, either. The problem is that there is nothing they can do to give external developers the guarantees they would need. The earlier they stop pretending that developers have nothing to worry about, the better.

Can you imagine businesses the size of Google or eBay or Amazon being built on America Online or CompuServe? Well, that won’t happen on Twitter or Facebook either, and that’s precisely the problem.

Luckily, AOL’s and CompuServe’s walled gardens have crumbled and the Internet jungle has taken over. Anyone can now create a web site for the entire (free) world to visit without asking anyone’s permission. An email can reach any one of the billions of Internet users regardless of their email client or Internet service provider. Blogs are similarly easy to create and to access using standard technologies.

Switching to another service with a similar centralized model is clearly not the solution (sorry, App.net). I believe it is in everyone’s best interest to create and start using open standards to share status updates, location check-ins, photos, videos, news stories, links, upvotes, downvotes, questions, answers, product reviews, witty remarks, lolcats or what you have had for lunch. Either privately or publicly.

How do we go about that? Brent Simmons had a very interesting suggestion: third-party Twitter clients should add support for an open alternative and offer the option to their users to publish on both. It would make it really easy and painless to switch and would help the new service reach critical mass. Is that the reason why Twitter is cracking down on client apps? Do they fear they could be too easily left out of the loop?

Now, public sharing is the easy part. There is nothing fundamentally different between a tweet and a 140-character blog post (OK, the real-time issues are not trivial). I also believe that public sharing will give rise to the most revolutionary new applications (if not global consciousness, at least some breakthroughs in scientific research and governance).

Private sharing is a stickier problem. How do you broadcast information on a peer-to-peer network while making sure that only the people you have selected can access it? How do you revoke those permissions? By definition, you don’t control the software running on nodes other than yours, so you cannot assume that they are well behaved and will simply comply with your requests. That is an interesting challenge for computer scientists and cryptographers. But if it is possible to create a secure distributed crypto-currency (like Bitcoin), I am hopeful that a secure sharing system is within reach too.
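One classic approach to the selective-access half of the problem is envelope encryption: encrypt the post once with a random content key, then wrap that key separately for each chosen recipient, so untrusted relay nodes can carry the ciphertext without being able to read it. Here is a deliberately minimal sketch of the structure; the XOR “cipher” and the function names are mine, standing in for a real authenticated cipher (e.g. AES-GCM) and proper public-key wrapping, and key distribution is assumed already solved:

```python
import secrets

def xor(key, data):
    # Placeholder cipher for illustration only; a real system would use
    # an authenticated cipher such as AES-GCM, never a repeating XOR.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def share(plaintext, recipient_keys):
    """Encrypt once with a random content key, wrap that key per recipient."""
    content_key = secrets.token_bytes(32)
    ciphertext = xor(content_key, plaintext)
    envelopes = {name: xor(key, content_key)
                 for name, key in recipient_keys.items()}
    return ciphertext, envelopes

def open_share(ciphertext, envelopes, name, my_key):
    """Unwrap my copy of the content key, then decrypt the post."""
    content_key = xor(my_key, envelopes[name])
    return xor(content_key, ciphertext)
```

Revocation then amounts to choosing a fresh content key for subsequent posts and simply not wrapping it for the revoked friend; content already delivered cannot be un-shared, which is part of what makes the problem sticky.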

If a true peer-to-peer system is too hard, how about old-fashioned interoperability? You know, post on one network, let your friends see your update on another service. Sadly, both Facebook and Twitter are moving away from that, taking their users hostage in their turf wars.

It is tragic that Google has chosen to create their own proprietary social network (Google+) to compete with Facebook and Twitter. It’s like they time-travelled to 1990 and decided to build a competitor to AOL instead of helping create the open web. They had (and still have) the technical talent, the resources, and I believe the right motivation to make it happen.

If instead of peddling Google+ to everyone, Google put its weight behind a truly open and Internet-friendly sharing system, it would become more successful than Google+ (or Facebook) will ever be. Please Google? Or Yahoo!? Here is your chance to shine again!

(image credit: morguefile.com)

Why Siri had to start in beta

Bashing Siri, the iPhone 4S virtual assistant, seems to be fashionable these days. Mat Honan declares it “Apple’s broken promise”. CNN reports on Siri’s alleged anti-abortion bias (via Danny Sullivan). Colbert weighs in. John Gruber remarks how weird it is for Apple’s flagship new product to be “so rough around the edges”, yet notes that it will be easier to improve voice recognition while it’s being widely used.

It’s not just easier, it’s the only way!

I worked on speech recognition with IBM Research for nearly six years. We participated in DARPA-sponsored research projects, field trials, and actual product development for various applications: dictation, call centers, automotive, even a classroom assistant for the hearing-impaired. The basic story was always the same: get us more data! (Data being, in this case, transcribed speech recordings.) There is even a saying in the speech community: “there is no data like more data”. Some researchers have argued that most of the recent improvements in speech recognition accuracy can be credited to having more and better data, not to better algorithms.

Transcribed speech recordings are used to train acoustic models (how sound waveforms relate to phonemes), pronunciation lexicons (how people actually mispronounce words, especially people and place names), language models (spoken phrases rarely conform to English grammar), and natural language processors. And that’s for each supported language! More training data means the recognizer can handle more variations in voices, accents, manners of speech, etc. That’s undoubtedly why Nuance, for example, offers a free dictation app.
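The role of data in a language model is easy to see with a toy bigram model (a deliberately simplified sketch of the general technique, not how any production recognizer is built; the function names are mine):

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count word-pair frequencies over a list of transcribed sentences."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = ["<s>"] + sentence.lower().split() + ["</s>"]
        for prev, word in zip(words, words[1:]):
            counts[prev][word] += 1
    return counts

def probability(counts, prev, word):
    """P(word | prev); unseen pairs get a tiny floor instead of zero."""
    total = sum(counts[prev].values())
    if total == 0 or word not in counts[prev]:
        return 1e-6
    return counts[prev][word] / total
```

Trained on just two transcripts like “call my mom” and “call my office”, the model already prefers phrases it has seen; every additional transcript shrinks the set of phrases it considers near-impossible. That is the “no data like more data” effect in miniature.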

It is tempting to consider Siri as some kind of artificial intelligence who, once trained properly, can answer all sorts of questions. The reality is that it is a very complex patchwork of subsystems, many of which are handcrafted.

To improve Siri, engineers must painstakingly look at the requests that she could not understand (in all languages!) and come up with new rules to cope with them. There are probably many, many gaps like “abortion clinic” in the current implementation, which will be fixed over time. When Apple states “we find places where we can do better, and we will in the coming weeks”, they are plainly describing how this process works.

It is important to understand that unlike Apple’s hardware and app designs, Siri’s software could not have been fine-tuned and thoroughly tested in the lab prior to a glorious release. It had to be released in its current form, to get exposure to as much variability as possible all the way from the acoustics to the interpretation of natural language. For each of the funny questions that Apple’s engineers had anticipated, poor Siri has to endure a hundred others.

If the rumors of a speech-enabled Apple TV are true, then Siri will soon face other challenges. For example, far-field speech recognition is notoriously more difficult than recognition with close-talking microphones. She had better get a head start with the iPhone 4S.


[UPDATE: There has been a lot of interest in this article, so I thought I would clarify a few things.]

- I have no inside information. Everything I wrote about Siri is an educated guess based on my own experience. I may be totally wrong, and I probably missed some important parts of the story.

- I did not mean to imply that Siri’s system is rule-based. I am convinced that it relies heavily on statistical learning. But someone has to train, fine-tune, test and debug statistical algos with new data and new use cases. Sometimes you just throw in the new data and press the “retrain” button. Sometimes you have to dive in and adapt the algorithms. And sometimes, in order to squeeze out the last few percentage points, you may write some old-fashioned rules, like for Siri’s quirky replies.

- As a few commenters pointed out, Apple has already gathered a lot of data from the previous Siri app. I think they used it to build the best system they could, which is already quite impressive IMO. They had to release it to be able to go even further. New data brings diminishing returns: at some point, 20% or 50% more data is insignificant; you want 10x or 100x more.

Why I am grateful to Apple and Steve Jobs (a developer’s perspective)

I fell in love with computers at 14.  My dad had taken me to the local community center.  They had a “computer initiation” class on a bulky Commodore CBM system.  Some people proudly brought their own Sinclair ZX-81 along.  I had never imagined that anything like that was possible.

Soon I had my own computer, a TI-99/4A, and I taught myself to program in BASIC, then in Assembler.  I mostly made games and traded them on tapes, by mail, with the TI99 club.  I have been programming and tinkering with computers since then, at work and at home.

There was a lot of excitement about computers, countless user groups and programming magazines.  The first word processor, the first spreadsheet, the first CAD program, the first graphical interface were all invented.  New software and hardware companies would pop up like mushrooms.  There was an incredible sense of opportunity and of endless possibilities.

After that?  Not so exciting stuff.  We got laptops, then netbooks, cheaper and crappier every year.  Bloated and buggy operating systems.  Browser wars.  Viruses. Enterprise applications that, even in 2011, would be better suited to a green 80×24 terminal. Write-once, run-ugly-anywhere software. Crash-prone smartphones so difficult to use that only computer geeks and corporate email addicts could be bothered.

The only bright spot? The Internet and web applications made computers even more useful, and brought some excitement back.

Most innovation happened in the browser, because regular users were wary of installing any third-party software.  With byzantine installation procedures, compatibility issues and rampant malware, I can’t blame them.

Then came the iPhone and the iPad.  Developers could again craft innovative software and put it directly in front of millions of users.  And a networked, pocket computer packed with sensors sure opens the door for plenty of innovation!

Customers now try and buy many apps because it is so simple and because they are reassured that it is safe.  Indie developers do not have to worry about distribution, billing, payment processing or returns; they can focus on what they do best: design and coding.

Apple has led the way and raised the bar for the whole industry.  Google and Microsoft have had to come up with their own platforms.  There is now a healthy competition that was sorely missing during the previous decade(s).

But most of all, Apple has restored a sense of wonder for what computers (in the form of smartphones and tablets) can do.  It has brought back the excitement and the endless possibilities that motivated so many software developers like me thirty years ago.

Thank you Apple, and thank you Steve for that.

[image: Jonathan Mak]