
Entropica Claims “Powerful New Kind of AI”

May 11, 2013

A new start-up called Entropica is claiming to have discovered mathematical equations that allow an autonomous system to select and achieve its own goals.

Entropica is a powerful new kind of artificial intelligence that can reproduce complex human behaviors, including the ability to autonomously set and implement its own goals.  In this video, we will see how Entropica can walk upright, use tools, cooperate, play games, make useful social introductions, globally deploy a fleet, and even earn money trading stocks, all without being told to do so.

Here’s the full pitch video:

Sounds great, doesn’t it?  But unfortunately, there’s a lot here that sends my hype detector pegging into the red zone.  Let’s take the first claim: “Entropica can walk upright.”  As humanoid robot hobbyists, we know what an impressive feat that is; you have to coordinate a large number of actuators, maintain balance while moving forward, shift weight from side to side, deal with possible slopes or small obstacles, and so on.  Entropica can do all that?  Well, no.  What’s actually shown is Entropica solving the classic pole-balancing problem.  That’s a control problem that’s easy to solve with any of a dozen different techniques, and it has almost no relationship to walking at all.
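To show just how little machinery that demo needs, here’s a complete pole balancer.  This is a minimal sketch using the textbook cart-pole dynamics and hand-picked PD gains of my own choosing; it has nothing to do with Entropica’s method, which is exactly the point.

```python
# Pole balancing ("cart-pole") with a plain PD controller -- one of the
# many standard techniques that solve this control problem.
import math

# Cart-pole physical parameters (typical textbook values)
GRAVITY = 9.8
CART_MASS = 1.0
POLE_MASS = 0.1
POLE_HALF_LENGTH = 0.5
DT = 0.02  # integration step, seconds

def step(state, force):
    """Advance the cart-pole one Euler step under an applied force."""
    x, x_dot, theta, theta_dot = state
    total_mass = CART_MASS + POLE_MASS
    sin_t, cos_t = math.sin(theta), math.cos(theta)
    temp = (force + POLE_MASS * POLE_HALF_LENGTH * theta_dot**2 * sin_t) / total_mass
    theta_acc = (GRAVITY * sin_t - cos_t * temp) / (
        POLE_HALF_LENGTH * (4.0 / 3.0 - POLE_MASS * cos_t**2 / total_mass))
    x_acc = temp - POLE_MASS * POLE_HALF_LENGTH * theta_acc * cos_t / total_mass
    return (x + DT * x_dot, x_dot + DT * x_acc,
            theta + DT * theta_dot, theta_dot + DT * theta_acc)

def pd_controller(state, kp=50.0, kd=5.0):
    """Push the cart toward the direction the pole is falling."""
    _, _, theta, theta_dot = state
    return kp * theta + kd * theta_dot

state = (0.0, 0.0, 0.1, 0.0)  # start with the pole tilted about 6 degrees
for _ in range(500):          # simulate 10 seconds
    state = step(state, pd_controller(state))
print("final pole angle: %.4f rad" % state[2])  # settles near zero
```

About thirty lines, no learning, no entropy, and the pole stays up.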

OK, how about making useful social introductions?  A robot that could say hello, introduce itself, and exchange a bit of personal information would indeed be pretty neat, especially if it did so without being specifically programmed to.  But again, no, that’s not what’s shown here.  Instead they show Entropica adding links in an acyclic graph while other links are randomly removed, and there are fairly trivial solutions to that problem, too.
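For instance, here’s a toy baseline of my own invention (a hypothetical setup, not the one from the video): keep a network connected under random link failures by doing nothing cleverer than patching each split as it appears.

```python
# Trivial baseline for "keep the network connected while links randomly
# disappear": whenever the graph splits, add one link between components.
import random

def components(nodes, edges):
    """Return the connected components as a list of sets (simple BFS)."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        comp, queue = set(), [n]
        while queue:
            cur = queue.pop()
            if cur in comp:
                continue
            comp.add(cur)
            queue.extend(adj[cur] - comp)
        seen |= comp
        comps.append(comp)
    return comps

random.seed(1)
nodes = list(range(12))
edges = {(i, i + 1) for i in range(11)}  # start as a simple chain

for _ in range(50):
    edges.discard(random.choice(sorted(edges)))  # a random link fails
    comps = components(nodes, edges)
    if len(comps) > 1:                           # graph split: repair it
        a = random.choice(sorted(comps[0]))
        b = random.choice(sorted(comps[1]))
        edges.add((min(a, b), max(a, b)))

print("components at the end:", len(components(nodes, edges)))  # 1
```

No foresight required; a one-line repair rule keeps the network in one piece indefinitely.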

Now, not to be a complete curmudgeon: it is normal and expected for new approaches to cut their teeth on toy problems first.  (It’s rather less normal to hype the results of such toy-problem tests this heavily, at least on this side of the pond.)  It’s entirely possible that the authors (A. D. Wissner-Gross of Harvard and MIT, and C. E. Freer of the University of Hawaii) really have found a useful new approach to solving problems.  The basic idea, that an agent should make choices that maximize its future options, makes some sense as a general principle.  Whether it scales up to nontrivial problems remains to be seen.
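For the curious, the central equation of the paper defines a “causal entropic force” that pushes a system toward states from which the greatest variety of futures remains reachable.  As best I can render it (notation per the paper; treat this as my paraphrase, not a substitute for the original):

$$ \mathbf{F}(\mathbf{X}_0, \tau) \;=\; T_c \, \nabla_{\mathbf{X}} S_c(\mathbf{X}, \tau) \,\Big|_{\mathbf{X}_0} $$

Here $S_c(\mathbf{X}, \tau)$ is the entropy of the distribution over paths the system could follow during the next $\tau$ seconds starting from macrostate $\mathbf{X}$, and $T_c$ is a constant (the “causal path temperature”) that sets the strength of the force.  The agent simply follows this gradient, moving in whatever direction most increases the diversity of its possible futures.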

The full paper is behind a paywall at Physical Review Letters; for more curmudgeonly analysis, see this New Yorker article.

From → News, Opinion, Videos

7 Comments
  1. Erik Frebold permalink

    Not behind a paywall at the moment:

    PhysRevLett_110-168702.pdf

  2. Thanks Erik, that’s a great find!

  3. IDK permalink

    When I skimmed your article, it seemed that you didn’t get the point... the machine is not being told to take those actions... it’s deciding on its own to make an optimal future for itself, calculating the repercussions of each of its possible actions.

  4. Yeah, IDK, I got that. But the same can be said for ANY machine learning algorithm, or even non-adaptive algorithms — anything with an evaluation function. The point is, the toy problems solved have also been solved by lots of other algorithms, and were far short of the claims made in the video. So, we’ll just have to wait and see if future results put some teeth behind those extraordinary claims.
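    To make that concrete, here’s the sort of thing I mean: a completely generic agent that “decides on its own” only in the sense that it greedily follows whatever evaluation function you hand it.  (A hypothetical sketch, not Entropica’s algorithm.)

```python
# A generic "autonomous" agent: never told which action to take, only
# given an evaluation function -- just like countless other algorithms.
def choose_action(state, actions, simulate, evaluate):
    """Pick the action whose simulated outcome scores best."""
    return max(actions, key=lambda a: evaluate(simulate(state, a)))

# Toy example: a 1-D agent that "decides on its own" to walk toward 10.
state, actions = 0, (-1, 0, +1)
for _ in range(12):
    state += choose_action(state, actions,
                           simulate=lambda s, a: s + a,
                           evaluate=lambda s: -abs(s - 10))
print(state)  # 10 -- reached without ever being told to move right
```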

  5. Jim Hayes permalink

    I think everyone is completely missing the cosmological implications of the formula.

  6. Chris permalink

    The point of it isn’t that it can solve problems. The point is that it knows what problems to solve. It wasn’t about solving the toy problems, but about discovering them on its own.

Trackbacks & Pingbacks

  1. ENTROPY – THE NEXT HEURISTIC FOR AI – part-2 | The True Man
