Musk's and Google's AI Beat Human Pro Gamers

It's no secret that artificial intelligence is progressing rapidly. It seems like every couple of weeks there's an unexpected development that takes people by surprise. It's starting to become clear that we're near the start of a new era shaped by artificial intelligence.

Those who aren't paying attention are likely to be taken aback by what will become possible in a few years. Some voices have spoken out about the dangers of powerful AI and the safety measures that must go along with developing it. One of the loudest voices of concern has been Elon Musk's. In a strange twist, Elon's non-profit artificial intelligence startup, called OpenAI, has just achieved a pretty remarkable feat. In this article, we'll take a look at what's going on. So what's the background story?


Every year the game developer Valve hosts a competition for expert players of Dota 2, a popular online multiplayer game. The competition draws professional players from all around the world, all fighting for a twenty-four-million-dollar grand prize. This year there was a special guest competitor who wasn't human: an AI trained by the engineers of Elon Musk's startup OpenAI. When put up one-on-one against one of the world's best Dota 2 players, crowd favorite Dendi, the artificial intelligence won. Even a year ago, nobody was sure this kind of thing was possible.


Dendi was surprised that an AI could outplay a human. He said that the AI "felt like a human, but a little like something else". This feat was achieved with just two weeks of real-time learning by the AI. The engineers state that during this training period it accumulated lifetimes of experience. They also state that the rules of Dota are so complicated that if you just wrote down pre-programmed rules and some code to follow them, the end result wouldn't even be as good as an average player. The artificial intelligence was instead trained from scratch, with no knowledge of the game: it played against itself over and over again until it had mastered the game.
How on earth does a computer learn to play Dota?

So this bot is quite unlike anything you've seen before.

We've coached it to learn just from playing against itself, so we didn't hard-code in any strategy, and we didn't have it learn from human experts. From the very beginning it just keeps playing against a copy of itself. It starts from complete randomness and then makes very small improvements, and eventually it reaches pro level. These are pro players, these are human brains it's up against.

So you're telling me this robot has failed so many times that it's now actually better than professional Dota players?


It's played through what amounts to lifetimes of experience. It's played so many games of Dota, it's explored many different strategies, it's learned to exploit the players it beats, and it's just explored far more of the strategy space than any human has.
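To make the self-play idea concrete, here's a minimal toy sketch in Python. Everything in it is an illustrative assumption: the real bot was a large neural network trained with reinforcement learning at massive scale, whereas here a "policy" is just a single skill number and play_game is a stand-in for an actual match. The shape of the loop is the point: play a frozen copy of yourself, make a small random change, and keep the change only if it wins.

```python
import random

# Toy stand-ins: a "policy" here is a single skill number; in the real
# system it would be a large neural network playing full games of Dota 2.
def play_game(skill_a, skill_b):
    """Simulate one game; the stronger policy wins more often.
    Returns True if player A wins."""
    return random.random() < skill_a / (skill_a + skill_b)

def self_play_training(generations=2000, games_per_eval=50):
    policy = 1.0  # start from essentially random play
    for _ in range(generations):
        opponent = policy                                       # frozen copy of itself
        candidate = max(policy + random.gauss(0, 0.05), 0.01)   # small random tweak
        wins = sum(play_game(candidate, opponent) for _ in range(games_per_eval))
        if wins > games_per_eval / 2:                           # keep only improvements
            policy = candidate
    return policy

print(self_play_training())  # skill climbs steadily, with no human data at all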

Musk is hailing this achievement as the first time that artificial intelligence has been able to beat professionals in competitive esports. With esports being considered for the 2024 Olympics, this is certainly interesting. This actually isn't the first notable achievement by OpenAI; they've done some pretty cool stuff in the past. The company invented a method where humans can interact with a robotic AI and teach it just like you would teach a human. Here's how it works.

First, a human wears a VR headset and does a task, and the robot watches and then imitates the task in real time, without ever having done it before. In this case, it's learning how to stack some Lego blocks. That task sounds very simple for a human, but it's actually extremely hard for a machine to do.

The AI manages to do this by having its visual neural network trained on a large set of images of what the blocks and simulated environments could look like.

This first visual neural network then feeds its output to another neural network, called the imitation network. After this training, and after just one single demonstration of what to do, the robot can now stack blocks, even with different colored blocks placed in different positions every time.

This means that the robot has to, and does, perform actions different from the scenarios it has already seen. The end goal is to create an AI that can adapt to new and unpredictable environments. A rough sketch of how the two networks fit together is below.
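Here's a minimal, hypothetical sketch of that two-network pipeline in Python. The function names (vision_network, imitation_network) and the toy representations are assumptions made for illustration; OpenAI's actual system used deep networks trained on randomized simulated images, while here "perception" is just denoising a noisy position array.

```python
import numpy as np

def vision_network(image):
    """Stand-in for the vision net: maps a (noisy) view of the scene to
    estimated block positions. The real network was trained on many
    randomized simulated scenes so it generalizes to new colors/lighting."""
    return np.round(image, 1)

def imitation_network(demo_layout, current_positions):
    """Stand-in for the imitation net: given ONE demonstrated goal layout
    and the current scene, emit the next action toward that goal."""
    for i, (cur, goal) in enumerate(zip(current_positions, demo_layout)):
        if not np.allclose(cur, goal, atol=0.05):
            return ("move_block", i, tuple(goal))
    return ("done",)

# A single human demonstration defines the goal: three blocks stacked up.
demo_layout = np.array([[0.0, 0.0], [0.0, 0.1], [0.0, 0.2]])

# The robot then faces the SAME task with blocks scattered differently.
positions = np.array([[0.5, 0.0], [0.2, 0.3], [0.4, 0.1]])

while True:
    observed = vision_network(positions + np.random.normal(0, 0.01, positions.shape))
    action = imitation_network(demo_layout, observed)
    if action[0] == "done":
        break
    _, block, target = action
    positions[block] = target      # pretend the arm executed the move

print("Final layout:\n", positions)
```

The design point is the split of responsibilities: perception generalizes across appearances, while the imitation network infers the task from a single demonstration rather than from thousands of labeled examples.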

Okay, now back to OpenAI's recent achievement. Elon has tweeted that this feat of beating some of the best Dota players in the world with an AI is a task much more complicated than chess or the board game Go.



Now, I'm not so sure that I agree with Elon's statement that an esport is more complicated than Go. More complicated than chess? Yes, definitely. But Go? Perhaps not. Some state that Go is the most brilliant game ever made. Go is a 3,000-year-old Chinese board game, and an AI called AlphaGo, from the company DeepMind, recently beat the world champion in a series of matches.

This event was hailed as the biggest moment in artificial intelligence, one not expected for another decade. Here is DeepMind CEO Demis Hassabis talking about some of the complexity of Go:

The thing about Go is that it only has two rules. I could teach you the game in five minutes, but it leads to incredible complexity. It's probably the most elegant game that mankind has ever devised. What happens in Asia, in Korea, in Japan and in China, is that if you show promise in the game of Go at the age of five, six or seven, you get taken out of normal school and put into a Go school, where you study Go twelve hours a day, seven days a week, with your peers who are also trying to become professional Go players. This is taken really seriously, and it's been like this for hundreds of years.

Now, one way to illustrate the complexity of the game is that there are more board configurations in the game of Go than there are atoms in the universe. So there's no way that you can solve this game through brute-force calculation; it's much too complex. Even if you took all the compute power in the world and ran it for a million years, that wouldn't be enough computing power to calculate all the variations in Go.
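It's easy to sanity-check that claim with some rough arithmetic. Each of the 19×19 = 361 points on a Go board can be empty, black or white, which gives 3^361 (about 10^172) raw configurations as an upper bound, against a commonly cited estimate of roughly 10^80 atoms in the observable universe:

```python
# Rough numbers behind "more configurations than atoms in the universe".
# Each of Go's 19 x 19 = 361 points is empty, black, or white.
upper_bound = 3 ** 361          # raw configurations (not all are legal)
atoms_in_universe = 10 ** 80    # common rough estimate

print(f"3^361 has {len(str(upper_bound))} digits")   # 173 digits, ~10^172
print(upper_bound > atoms_in_universe ** 2)          # True, by a wide margin
```

Not every one of those configurations is a legal position, but the count of legal positions is still astronomically large, on the order of 10^170.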

If children are literally pulled out of school to spend half of their lives training to master this game, I can't just dismiss it and say that Dota 2 is more complicated, although, again, this could depend on your definition of complicated. It's just something to think about. On the topic of DeepMind,

I imagine that some of you would be interested in a few juicy updates about what DeepMind's AlphaGo has been up to. Also, what has humanity learned from that moment when the Go world champion was defeated by an artificial intelligence?

Well, while playing its winning match against the Go champion, AlphaGo played some very strange moves, and they ended up giving it the advantage it needed to win the match.

To be clear: in over 3,000 years of humans studying and playing Go, we've never thought of playing the way that AlphaGo did. AlphaGo's moves are now being used in Go schools to train students and expand their way of thinking about how to play the game. Further to this, DeepMind plans on using the AlphaGo algorithm for more general-purpose functions, as it shows great promise in its ability to learn. In Hassabis's words: I think of AI as this incredibly powerful tool that will augment human ingenuity and unlock our true potential. In fact, one way you can think about AI, and indeed AlphaGo, is as analogous to the Hubble telescope, a kind of ultimate tool to explore the universe. Of course, for the Go players, AlphaGo was allowing them to explore their universe, the game of Go.

And I think there are many other domains in the real world that suffer from the kind of combinatorial explosion that Go has. Now, obviously, as I said at the beginning, we test our systems on games because they're the most convenient way to develop our AI algorithms, but ultimately we're not interested in just being good at games.

We want to translate those algorithms into the real world, where they can be useful and make huge impacts on real-world situations. And one reason we believe we can do that is because we're building general-purpose learning systems. They're not handcrafted for the game they play, like chess engines were. We've actually built what we believe are general-purpose algorithms that can be taken from the games we test them on and applied to the real world, and we're applying them to all sorts of other areas: healthcare, robotics, and even optimizing data centers.

So we took a variation of AlphaGo over last summer and applied it to Google's data centers, and we managed to save 15% of the power that was used in those data centers by controlling the cooling systems more efficiently.

So there you have it. As always, these are some very interesting times in the field of artificial intelligence. It seems that having artificial intelligence play games is a great way to enable it to learn, and, further to this, the knowledge learned can be applied in other fields, making the act of playing games much more important than it would at first seem. Both DeepMind and OpenAI are going with this strategy. So what's your view on the story?

Let me know in the comments below.


