Tag: AI

  • Artificial Intelligence: Fact vs Fiction

    One of the most misunderstood terms in computer science is “artificial intelligence”. While many people are familiar with the term artificial intelligence, or its shortened form, AI, the picture of AI in their minds often doesn’t reflect reality. Sci-fi movies paint a picture of AI as simply a human-like intelligence that lives in a computer. That’s not entirely accurate.

    Today we’re looking a bit more closely at real artificial intelligence initiatives, how they differ from pop culture depictions of AI, and some of the ethical and philosophical questions raised by artificial intelligence technology.

    What is Artificial Intelligence?

    In simple terms, artificial intelligence refers to intelligence being displayed by machines, in contrast to the natural intelligence displayed by humans and other animals. In the popular conception of an artificial intelligence, the term refers to machines that can mimic natural intelligence features such as problem solving, learning and innovation.

    Within the scientific community, there is an ongoing phenomenon known as the “AI effect”: any functionality once thought to require “artificial intelligence” that becomes achievable by current-day machines is no longer dubbed AI. For instance, tasks such as understanding human speech, playing games like chess and Go, and deciphering written language were all once reserved for “artificial intelligence,” though they are now common computer programs.

    In short, as Tesler’s Theorem jokes, “AI is whatever hasn’t been done yet.”

    Types of AI

    There are three main types of artificial intelligence: analytical, human-inspired and, finally, humanized AI. Analytical AI is the simplest, encompassing capabilities like learning, problem-solving and building representations of the world. Human-inspired AI is more complex and would involve the understanding and emulation of human emotion. Essentially, these would be “emotionally intelligent” AI.

    Finally, humanized AI would most closely resemble the sci-fi incarnation of a human-like intelligence that can think, reason, emote and feel in all the same ways as a human being. Humanized AI, in theory, would be fully self-aware, cognizant and, essentially, would have all of the elements that make natural intelligences aware of their place in the world. This form of AI carries serious philosophical and ethical implications.

    Ethics and Philosophy

    Humanized AI raises a serious question: is a sufficiently intelligent computer program, one that shows evidence of self-awareness, a person? Should society extend human rights and legal protections to artificial intelligences? How should we react should the artificial intelligence prove hostile, or hold values contrary to those of its creator?

    Even deeper than these questions, there are serious philosophical questions about the nature of consciousness. We know we are conscious, or, at least, each individual can know that they are conscious. However, it’s difficult to distinguish a sufficiently well-programmed piece of software from a truly self-aware machine intelligence. How can we know that the program in question is actually experiencing consciousness, not just emulating the signifiers of consciousness we programmed into it?

    Reality vs Expectation

    The difference between the reality of artificial intelligence and expectations of it has led to a number of miscommunications between researchers and their funders. Companies and universities funding AI research often expect fully-aware, sentient AI to leap fully-formed from the researchers’ computers, while the researchers are simply making iterative probes into the nature of machine learning and intelligence.

    In the short-term, it’s unclear if any of the software we currently have could be defined as “artificial intelligence,” due to the AI effect reclassifying innovations as simple machine processes, not intelligence. In the long-term, we will have a number of decisions to make regarding the future of artificial intelligence, how we as a species deal with machine intelligence, and what rights we extend to apparently self-aware programs.


  • Microsoft Build 2019: Biggest Headlines

    Microsoft’s Build summit is a yearly developers’ conference held to unveil new features across Microsoft’s various initiatives. This year’s headlines weren’t focused on Xbox or Windows, the way one might expect. Instead, Microsoft focused on their Azure platform, deep-learning algorithms, mixed reality and all things artificial intelligence.

    Here are the biggest headlines from Microsoft Build 2019.

    AI is Here to Stay

    Microsoft is driving full-speed ahead with artificial intelligence. While the company is best known for its Windows operating system, they’ve been focusing more and more on their artificial intelligence projects. In particular, Azure, their cloud platform and the home of their machine-learning services, has been a huge focus at the last few Build events.

    Initiatives like Azure Cognitive Services are focused on understanding audio and visual data, while voice recognition is another huge push for the company. The keywords here were machine learning and data interpretation. For the most part, it appears that Microsoft is focused on offering services for businesses, using machine learning to parse huge amounts of data.

    Microsoft’s in with Blockchain

    Microsoft announced their recent involvement with cryptocurrency at Build 2019. They highlighted Quorum, the JP Morgan Ethereum ledger platform, which was built using Microsoft Azure. However, blockchain tech is about more than just crypto. Microsoft announced they’d be using blockchain technology to help businesses form trustworthy ledgers built on the tamper-resistant nature of the chains.

    Mixed Reality

    Mixed reality, or augmented reality, is a close cousin of VR that allows virtual constructs to be displayed alongside real-world objects. The most obvious applications, of course, are for video games. However, other uses for mixed reality could include virtual presentations of 3D blueprints, making it an ideal technology for drafters and designers.

    Linux for Windows

    The only big Windows news at the Build conference this year will likely be completely overlooked by the average user. The Windows Subsystem for Linux functionality was recently added in a Windows 10 update, which is a big deal for developers, thanks to Linux’s dev-friendly architecture. However, it’s unlikely that the average Windows user will have any need to mess around with Linux.

  • New Samsung DRAM Chip Set to Overhaul Smartphones

    Processors in smartphones are receiving countless upgrades these days. Industry mainstay Qualcomm is ahead of the curve, producing chips for Android phones that rival some laptops. Apple’s A11 Bionic processor is like lightning on a chip, performing tasks with blistering speed. The newest entry into this smartphone race is Samsung, the Korean juggernaut that makes the Galaxy line of phones. Testing on their 8GB LPDDR5 DRAM line of new chips has begun in earnest. This new DRAM chip is slated to ship in AI-powered 5G smartphones.

    Less Power Consumption

    The biggest takeaway from Samsung’s marketing of this new DRAM chip is the low power consumption. Samsung promises that the chip is more efficient overall and will help save battery life. More importantly, though, the “deep sleep mode” offered by the chip will help ensure that battery consumption is low when the phone isn’t being used. Samsung claims this will lead to an average of 30 percent longer battery life for phones with these new DRAM chips.

    Surprising Power

    The 8GB LPDDR5 DRAM also boasts some impressive specs. The data rate is much faster than that of current standard chips, transferring around 51GB per second. That’s a bit hard to visualize, so here’s an example: that’s about 14 HD video files per second. Which is, frankly, nuts.
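    For the curious, here’s the rough arithmetic behind that “14 HD video files per second” figure. The ~3.7GB file size is our own illustrative assumption for a typical HD movie, not a Samsung number:

    ```python
    # Rough arithmetic behind the "about 14 HD video files per second" figure.
    # The ~3.7GB file size is an illustrative assumption, not a Samsung spec.
    bandwidth_gb_per_s = 51.2   # LPDDR5 peak transfer rate, roughly 51GB per second
    hd_file_gb = 3.7            # assumed size of one HD movie file
    files_per_second = bandwidth_gb_per_s / hd_file_gb
    print(round(files_per_second))  # prints 14
    ```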

    The combination of that high transfer speed and low power consumption will be critical for AI phones. AI will likely require large amounts of data throughput as it parses information. Similarly, the always-learning programs will benefit from lower power consumption and longer battery life. This means that Samsung’s newest chips will likely give it a competitive edge as personal phones with AI companions become the industry standard. Assistants like Alexa, Siri and Bixby will likely evolve alongside new chips in this vein, taking advantage of their strengths. The future of phones is AI, and Samsung has set itself up for success in this field.

  • Google’s DeepMind AI Now Capable of Rendering Scenes

    Google’s DeepMind neural network is now capable of rendering scenes. Not just any scenes, mind you, but complex ones. Using its neural networks and learning functions, DeepMind is capable of rendering hypothetical images it hasn’t seen before. While that might all sound rather abstract and hard to understand, it’s a huge leap for learning software. What exactly does this mean, and what effect will it have on AI going forward? 

    Rendering Scenes 

    The reason this is important, if somewhat boring-sounding at first, is that it represents a logical form of imagination. The AI is now capable of understanding a description of a geometrical scene, rendering it, and then rendering it from angles it has neither been shown nor had described. This is something humans do already, and easily.

    So easily, in fact, that you’re likely overthinking what is being described. If you see an image of a car, you can assume that it has four wheels, whether or not you can see all four in the image. Similarly, you can intuit that the pavement behind the car in the image is still there. You can even guess that there are seats inside the car, as well as a steering wheel and a radio.

    DeepMind’s New Functionality is a Game-Changer 

    This is the AI equivalent of imagination. An AI capable of understanding spatial scenes and making predictions based on limited data is a quantum leap forward. What’s more, the developers overseeing DeepMind didn’t anticipate this functionality.   

    Ali Eslami, a Google team leader, had this to say in a phone interview with Ars Technica: “One of the most surprising results [was] when we saw it could do things like perspective and occlusion and lighting and shadows. We know how to write renderers and graphics engines.” However, what Eslami found most compelling was that the software discovered these laws of physics on its own. The software was said to be a “blank slate,” and it was able to “effectively discover these rules by looking at images.”

    We’re living in an exciting era. AI advancements have been coming faster and faster, and soon we may even see fully aware learning software. This is both exhilarating and terrifying.  

  • Microsoft Seeks to Break into Retail Stores, And Kinect May be Involved

    Remember Kinect? At the time, it seemed like Microsoft muscling in on Nintendo’s turf. If you remember the distant epoch of 2010, the Wii was still riding on some serious popularity. Motion-controlled gaming was huge, and Microsoft wanted a slice of that pie. So, to compete, they released a camera and microphone combination called Kinect.

    The real wonder of the technology wasn’t the camera, though, it was the software powering it. Fast forward to 2018, when AI is a breath away from being a reality, and Microsoft is reviving the once-dead Kinect software in some innovative ways. And one is the elimination of checkout lines and cashiers. 

    Amazon’s Influence 

    The influence of bookseller-turned-juggernaut Amazon on the face of technology is hard to overstate. One example is Amazon’s cashier-free convenience store, Amazon Go. If you missed it, the first Amazon Go opened in Seattle last year and has a truly unique business model.  

    There are no cashiers, and there is no checkout line. Instead, you scan in past a turnstile with your Amazon Go app, which has your credit card info on file. Then you grab the stuff you want and technology in the store tracks what you have. When you leave, the app bills you for whatever stuff you have in your bag. Simple, right? Well, it’s powered by a pretty complicated suite of technology.  
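    That flow can be sketched as a toy model in a few lines of code. The class and method names here are hypothetical, invented for illustration; Amazon’s real system relies on a far more complex suite of cameras, shelf sensors and computer vision:

    ```python
    # Toy model of the Amazon Go shopping flow described above.
    # All names are hypothetical; this is an illustration, not Amazon's API.
    class GoStoreVisit:
        def __init__(self, shopper_id):
            self.shopper_id = shopper_id  # identified at the turnstile via the app
            self.cart = {}                # item -> price, as tracked by the store

        def pick_up(self, item, price):
            self.cart[item] = price       # sensors register the item leaving the shelf

        def put_back(self, item):
            self.cart.pop(item, None)     # returning an item cancels the charge

        def leave_store(self):
            # On exit, the app bills the card on file for whatever is in the bag.
            return round(sum(self.cart.values()), 2)

    visit = GoStoreVisit("shopper-42")
    visit.pick_up("sandwich", 5.99)
    visit.pick_up("soda", 1.99)
    visit.put_back("soda")
    print(visit.leave_store())  # prints 5.99
    ```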

    So complicated, in fact, that many traditional retailers are made quite nervous by it. How could they afford to implement such a complex change in their stores to stay competitive? Amazon is certainly saving costs by not having cashiers, and customers love the convenience. Could this spell doom for traditional retailers? 

    Not if Microsoft Has Anything to Say About It 

    Enter Kinect’s new lease on life. Microsoft is currently working to help implement technology like Amazon’s in traditional retail stores like Walmart and Target. While details are currently slim, the move makes sense. Retailers scared of becoming irrelevant can pay for Microsoft’s Kinect AI and stay competitive. Microsoft, in turn, keeps up with Amazon without having to invest in any inventory or construction.

    At the moment, it’s not set in stone. There are currently no announcements as far as roll-out or implementation of this technology. It’s still in the planning phase. However, if it materializes, it could spell the end of a lot of retail jobs. Hopefully the retailers affected would find other positions for the employees losing jobs. 

  • Helsinki to Lead the Way For AI Education with Free Class

    Helsinki University in Finland has launched a course on artificial intelligence — one that’s completely free and open to everyone around the world. A lot of tech giants like Google now have divisions working on artificial intelligence projects, and even whole non-tech industries already depend on AI for various tasks. It’s scary to think how much we already depend on AI and what it will entail in the future. It’s best to jump ahead of the curve on this, especially you youngsters. Parents, enroll your teens in this class for the summer.

     

    More Details on Helsinki’s Free AI Class

    Helsinki’s course focuses on the basics, starting with defining what AI is and explaining how it can solve problems. It is more of a beginner course. The first part of the materials is a discussion of how we already use AI in the real world. In the second part of the course, they explain how machine learning works and what neural networks are. It will apparently take you about 30 hours to complete the course. Students in Finland can even earn academic credits through the Open University.

    For us Americans: you can enroll in the course from outside Finland. Everyone who completes the course will get a certificate (PDF) emailed to them, with the option to post it to your LinkedIn page. You can do the course at your own pace, but they recommend trying to finish in six weeks. In their experience, having a deadline makes it more likely that students will finish the course.

     

    The Final Thought

    This is the big growth sector right now and will be even more so in the future. Helsinki University hopes that one percent of the Finnish population – some 54,000 people – will take the online course this year. So far 24,000 have signed up, and that is a lot. Think about that number and the magnitude with which AI will dominate our next half century: Finland could be the world’s new superpower. It could, and probably will, be the first nation to fulfill Elon Musk’s prophecy of the immortal AI dictator. The US needs to invest heavily in the next generation of programmers now and have them lead the curve on AI.

  • Top Ten Ways AI Will Change Your Life Forever

    After watching Google Duplex demonstrations at this year’s I/O conference, one thing is clear: Artificial intelligence will change our lives. All manner of practical applications are being explored by leading AI companies like Microsoft and Google. To help you wrap your head around what the future holds, we’ve rounded up 10 ways AI will be changing your life forever!

    Speech Recognition

    Photo Credit: Robohub

    The technology that powers all AI assistants, speech recognition tech is very exciting going forward. The ability to organize your life and appointments with your voice alone is engaging and exciting to a degree that is hard to overstate. Alexa, Google Assistant and the like all make excellent use of this technology to power their unique promise of being user-friendly, dynamic and useful to users of all skill levels. This technology can also help disabled people type by using their voices, increasing the accessibility of personal computers and other platforms.

    Making Appointments

    Photo Credit: Android Central

    Assistant calling was unveiled at 2018’s I/O conference by Google, in the form of their Google Duplex technology. This revolutionary AI functionality allows the Google Assistant platform to make phone calls for users, setting up appointments, reservations and the like. This is exciting for users with phone anxiety or trouble speaking, or just for users who are busy and don’t have time to call in themselves. While this raises a few ethical questions regarding machine voices being indistinguishable from human voices, the practical applications of the technology are undeniable.

    Machine Learning

    Photo Credit: Future of Life Institute

    Machine learning is largely considered the marker of “true” artificial intelligence. By synthesizing concepts encountered and reacting based on past information, AI with machine learning capabilities would be able to react in ways humans can’t predict. These applications could be applied to market trends in order to assist stock brokers or as assistants to surgeons during procedures. The potential uses of this type of learning are staggering: imagine intelligences reasoning and problem solving with the speed and accuracy of a machine during such tense activities as piloting a spaceship or during peace talks between warring nations.

    Advertising

    Photo Credit: The Sun

    People are still understandably anxious regarding companies like Facebook using algorithms to target them with content. However, this will hardly stop the future of learning algorithms and, eventually, true AI using their data collection to target users with advertisements and articles. As with many AI-related fields, this is a hotly debated subject, raising ethical questions as to the responsibility of advertisers and media platforms. It seems unlikely that such reservations will stop companies from leveraging the considerable power of machine learning in order to boost their profits.

    Personal Assistants

    Photo Credit: B & H

    The most obvious and immediate effect artificial intelligence has on the lives of everyday consumers is through the virtual assistants we all now carry in our pockets. Using a virtual assistant has become second nature to most smart phone owners. The act of asking Siri or Cortana or Google to make a phone call for us is natural and easy. Incorporating similar technology into smart speakers and smart displays was a logical next step: Google and Amazon want to bring their brand of AI into your life and make themselves indispensable. Individual reliance on AI assistants will likely become as commonplace as reliance on cell phones and the internet is now.

    Data Gathering

    Photo Credit: News Medical

    A pressing application for businesses, sifting through and organizing data is likely to be the primary role of corporate AI. For companies with extensive logs and ledgers, the ability to instruct an AI to quickly sort data and parse it for relevant information is likely to change the face of business. Reading market trends and sales data at speeds impossible for humans, AI could give companies an edge over the competition and help them make smart business decisions. Much like artificial intelligence applications in advertising, use as an information comb is likely to be a primary function of AI for business.

    Biometrics

    Photo Credit: Security Exhibition and Conference

    For consumers, thumbprint scanners that unlock their phones are a well-known application of biometric technology. Biometrics also, however, have many applications in relation to AI. Using AI to learn about people from their biometrics would allow for numerous advances. For example, early warning of diseases like cancer and diabetes would become more common. Additionally, such AI could help in developing treatments and cures for diseases monitored through biometrics. Market applications of biometric AI tech would likely include AI interfaces in retail stores that allowed for checkout using thumbprint or facial recognition. Technology in this vein is already in use via services like Apple Pay.

    AI-Optimized Hardware

    Photo Credit: Out of the Box Science

    As computer processors and graphics cards have begun pushing the limits of what their hardware is capable of, manufacturers have begun speculating on the potential of AI-assisted hardware. Using advanced learning AI to make micro-adjustments on the fly could help high end graphics cards or processors speed up beyond their current limitations. Coupled with experimental quantum computing techniques, such technology would likely reshape the face of computing as an industry and redefine what computers are capable of.  Such a leap ahead could potentially dwarf previous technological advancements.

    Automation of Processes

    Photo Credit: Forbes

    Much like the robotics boom that helped automate factory labor, AI technology will likely see automation of numerous work tasks previously able to be done only by humans. Utilizing Google Duplex-like technology, phone customer service could be fully automated. Rather than staffing buildings full of workers to answer phone calls, companies could outsource all phone calls to an AI programmed to answer questions. This is only one example: programming, research and development, analytics and market research could all be tackled by learning AI. Such advancements could be monumental, allowing for full automation of nearly all virtual jobs.

    Robotics

    Photo Credit: CardsChat

    Beyond just automating jobs that occur in a virtual space, complex AI software housed in advanced robotics could automate physical labor on a scale previously impossible. Agriculture, construction and maintenance work could potentially all be automated by AI-powered robots. While this likely sounds like high-concept sci-fi, such a future is becoming more possible, and more likely, with each passing day. As companies continue to push the boundaries of what AI is capable of, a future where all work tasks are automated becomes more and more likely. This could lead to a utopia where all people are free to pursue their own desires, or a dystopia where people are ruled by robots. Just kidding! Or maybe not. We’ll cross that bridge when we get to it! For now, I’ll suspiciously eye my Amazon Echo and hope it doesn’t try to kill all humans anytime soon.

  • Sony’s “Robo-Puppy” is Back!

    Initial thoughts

    • Highly advanced puppy AI
    • Adorable movements, and physical cues
    • Extreme price tag

    LAS VEGAS – At the 2018 CES convention, Sony unveiled Aibo, the world’s most adorable robot puppy. The world has seen Aibo in the past, but Sony has re-imagined and reanimated the once-stiff “robo-puppy”, and he’s melting hearts.


    Let’s Meet Aibo

    You may remember seeing Aibo years ago at the old KB Toys store, doing binary back-flips for attention but to no avail. Well, in 2017 Aibo was brought back to life by Sony, and he’s actually pretty great! At first glance, you see the robust face lift Aibo has received, along with the new OLED eye displays he comes equipped with. Aibo’s movements are shockingly fluid and his facial expressions are full of emotion. What gives Aibo his life-like movement is a 3-axis gyroscope and 3-axis accelerometer system, along with three facial sensors to give him those darling facial cues. Aibo has a total of 4,000 parts and 22 moving actuators, which is what makes his movements so impressive. At the 2018 CES, Aibo was slow to respond to some voice commands, but usually processed them within 10 seconds. Regardless, this pup can boogie around, and do so with a very high cute factor.

    That’s right, He Needs Training!

    Aibo’s new AI system is a whole other animal in itself (pun intended). Aibo is able to sit, stay, shake, and even cutely responds to head scratches. Aibo is also programmed with learning AI capability, so the more time you spend with him, the more he learns about his master. Aibo also responds to training! Sony’s new hook for Aibo is that you have to train him through voice commands and positive reinforcement, just like you do a real dog. Aibo is able to mimic a real puppy thanks to the series of sensors and cameras that help him understand both his environment and your interactions.

    This little pup is built with three touch sensors (one on the back, one on the top of the head and another under the chin), two cameras for image recognition and area mapping, one Time of Flight sensor for proximity detection, a special sensor to detect your presence from behind, four microphones and a motion detector. Aibo even comes with a little pink plastic ball which he can fetch! Dawwww.


    The Cost…Wowsers!

    At a whopping $2,000.00, this is one expensive thoroughbred! On top of the initial $2,000.00, users pay a monthly $26.00 for the Aibo services. Those services include continual software updates and cloud storage to help Aibo retain, learn and grow. To most, that cost is way too high for what is essentially a toy, but others might see it as a small price to pay for what is a very advanced and innovative pet. Right now Aibo recognizes English and Japanese, but other languages will be added in the near future. Like most new tech, the cost for Aibo will likely drop over time, but for now, get out your wallets!
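    For scale, a quick back-of-the-envelope tally of what those figures add up to in Aibo’s first year:

    ```python
    # Quick tally of Aibo's first-year cost, using the figures above.
    upfront = 2000.00          # initial purchase price
    monthly_service = 26.00    # monthly cloud storage and software updates
    first_year_total = upfront + 12 * monthly_service
    print(first_year_total)  # prints 2312.0
    ```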

    The Upside

    • Sony has made incredible advances in Aibo’s robotics and it’s very impressive
    • The learning AI will keep Aibo relevant in your home.
    • Because of Aibo’s AI structure, the capacity to fall in love with a robot is REAL…

    The Down

    • The high price tag obviously. Paying $2,000.00 plus $26.00 monthly is not very realistic for the average consumer.
    • Aibo still has a lot of glaring robotic issues to address before major release.
    • Not many aesthetic options to customize Aibo’s exterior to your liking; just the plain “cold steel” look for now


    The Solutions

    If Sony can make Aibo a much more affordable “robo-puppy” and really tap into the personalization of Aibo’s exterior, it could really be a home run for Sony. Aibo’s movements, responses and emotions seem genuine… yes, that’s kind of terrifying, but also incredible at the same time. Once the kinks are worked out, I feel Aibo could really offer someone emotional and physical companionship on a very real level.


  • AI can read! Tech firms race to smarten up thinking machines

    By MATT O’BRIEN, AP Technology Writer

    PROVIDENCE, R.I. (AP) — Seven years ago, a computer beat two human quizmasters on a “Jeopardy” challenge. Ever since, the tech industry has been training its machines to make them even better at amassing knowledge and answering questions.
    And it’s worked, at least up to a point. Just don’t expect artificial intelligence to spit out a literary analysis of Leo Tolstoy’s “War and Peace” any time soon.

    Research teams at Microsoft and Chinese tech company Alibaba reached what they described as a milestone earlier this month when their AI systems outperformed the estimated human score on a reading comprehension test. It was the latest demonstration of rapid advances that have improved search engines and voice assistants and that are finding broader applications in health care and other fields.
    The answers they got wrong — and the test itself — also highlight the limitations of computer intelligence and the difficulty of comparing it directly to human intelligence.

    Stanford doctoral student Pranav Rajpurkar, who helped develop the Stanford Question Answering Dataset.

    ERROR! ERROR!

    “We are still a long way from computers being able to read and comprehend general text in the same way that humans can,” said Kevin Scott, Microsoft’s chief technology officer, in a LinkedIn post that also commended the achievement by the company’s Beijing-based researchers.
    The test developed at Stanford University demonstrated that, in at least some circumstances, computers can beat humans at quickly “reading” hundreds of Wikipedia entries and coming up with accurate answers to questions about Genghis Khan’s reign or the Apollo space program.

    The computers, however, also made mistakes that many people wouldn’t have.
    Microsoft, for instance, fumbled an easy football question about which member of the NFL’s Carolina Panthers got the most interceptions in the 2015 season (the correct answer was Kurt Coleman, not Josh Norman). A person’s careful reading of the Wikipedia passage would have discovered the right answer, but the computer tripped up on the word “most” and didn’t understand that seven is bigger than four.

    “You need some very simple reasoning here, but the machine cannot get it,” said Jianfeng Gao, of Microsoft’s AI research division.

    HUMAN VS. MACHINE

    Like the other tests, the Stanford Question Answering Dataset, nicknamed Squad, attracted a rivalry among research institutions and tech firms — with Google, Facebook, Tencent, Samsung and Salesforce also giving it a try.
    “Academics love competitions,” said Pranav Rajpurkar, the Stanford doctoral student who helped develop the test. “All these companies and institutions are trying to establish themselves as the leader in AI.”

    LIMITS OF UNDERSTANDING

    The tech industry’s collection and digitization of huge troves of data, combined with new sets of algorithms and more powerful computing, has helped inject new energy into a machine-learning field that’s been around for more than half a century. But computers are still “far off” from truly understanding what they’re reading, said Michael Littman, a Brown University computer science professor who has tasked computers to solve crossword puzzles.

    Computers are getting better at the statistical intuition that allows them to scan text and find what seems relevant, but they still struggle with the logical reasoning that comes naturally to people. (And they are often hopeless when it comes to deciphering the subtle wink-and-nod trickery of a clever puzzle.) Many of the common ways of measuring artificial intelligence are in some ways teaching to the test, Littman said.
    “It strikes me for the kind of problem that they’re solving that it’s not possible to do better than people, because people are defining what’s correct,” Littman said of the Stanford benchmark. “The impressive thing here is they met human performance, not that they’ve exceeded it.”