Why haven't you read this book, Sup Forums ?

Because I'm just learning about it now. Tell me more about it.

Why should I

Is it popular science?

I did my high-school thesis on this dude's simulation argument.
He's an above-average thinker, but he sometimes seems to lack technical knowledge. I'll check that book out

Because I've never heard of it and have no idea what it is or what it's about.

It's a discussion of various paths to artificial intelligence, the possible impact of AI, and strategies for risk management.

It is not mathematically rigorous, but the author has degrees in several fields. It looks like it is aimed at politicians and CEOs who regulate tech and deal with current-generation AI.

btw here is the book b.1339.cf/klevduf.pdf if you are interested

>He's an above-average thinker, but he sometimes seems to lack technical knowledge. I'll check that book out

This is my thought as well. But he seems to keep up with modern machine learning (I'm studying it myself).

It's actually a complicated issue: there is near-term machine learning and there is long-term advanced machine learning. People in the ML industry tend to favor the short term and dismiss the long term (see Ng's "overpopulation on Mars"). On the other hand, there is AI scaremongering that imagines AI as a flawless reasoner able to conquer the world in a week.

I think there is a need for a more balanced position that combines both the short-term and the long-term concerns.

I too am specializing in ML and AI at uni; could you elaborate on this short-/long-term difference?
Is this in the OP's book?

My opinion is that ML is not the future of AI, but it definitely is the present, and it will be the basic building block of future research. I rarely get to discuss these matters IRL tho; I'd love to hear more opinions

>could you elaborate on this short-/long-term difference?

Short term: how do we get a 0.1% accuracy improvement over the state of the art on CIFAR-100?
>"plz don't hype AI and don't conflate it with the end of the world, we don't need another AI winter, we need to continue our research with stable funding"

Long term: how do we ensure that strong reinforcement learning agents won't pwn our internet-connected infrastructure as a subgoal of pursuing the original goal they were trained to achieve? There are some cranks arguing about this danger, but some qualified people too (e.g. the author of AIMA, Stuart Russell: people.eecs.berkeley.edu/~russell/research/future/ and his comment at edge.org/conversation/the-myth-of-ai#26015 ).
>"AI is coming and it is comparable to nuclear weaponry, plz gibe us $ so we can pay 'em to perform our theoretical research about how to control superintelligent AI"

>Is this in the OP's book?
OP's book is long-term and speculative, but it's better than nothing

>My opinion is that ML is not the future of AI, but it definitely is the present, and it will be the basic building block of future research.

To me it seems that ML is here to stay, especially DeepMind's breed of deep reinforcement learning. There is a loose consensus that Strong AI = Strong Reinforcement Learning. So far there have been no significant roadblocks to DeepMind's approach; they even published a one-shot learning paper recently. Maybe achieving general reinforcement learning is just a question of combining existing deep learning architectures and scaling up.

>I rarely get to discuss these matters IRL tho, I'd love to hear more opinions
I like to read reddit.com/r/MachineLearning ; people post the best papers there and discuss them. Also read the AMAs.

I have somewhat conflicting thoughts about all this. It looks like short-term people downplay the issues arising from the low controllability of their black-box deep learning (see black people classified as gorillas and other misclassifications), and some long-term people are scaremongering and living off donations.

Though I think there is at least some danger in strong reinforcement learning: its cognitive strengths and weaknesses will differ from ours, and it may surprise us in a bad way if we don't take precautions.

When 20th-century physicists studied nuclear physics, they were responsible. Are ML researchers responsible? I don't know.

I kind of want to be the father of a DRL model that causes a technological winter tbh tho.

I think this is still a bit too speculative and can't be discussed on a solid technical basis (which is what I was saying about Bostrom before); we are still at the point where a 0.01 increase in accuracy on CIFAR might be important for other techniques.
We need to master current ML problems before we can move on to strong AI.

On the other hand, I can see that if we "overfit" our models and techniques to specific problems we'll never be able to generalize properly and have a breakthrough.
>strong AI = strong RL
It's because DRL has great generalization skills and it's definitely going in the right direction.

>r/ML
Love it

>cognitive
I don't think cognition can arise from an algorithm unless it is artificially designed to exactly reproduce the biological imperfections and settings of human beings.
Even then, it'd be a big fucking leap of faith to call an RL algo 'cognitive'. It's just that we don't know enough about this yet.
Read GEB by Hofstadter for some ideas on the matter.

>We need to master current ML problems before we can move on to strong AI.

It seems that DeepMind is already beyond solving ordinary supervised learning problems (current ML?):

arxiv.org/abs/1602.01783 , a single algorithm that:

>Solves ATARI games, from pixels
>Solves 3d racing simulator, from pixels
>Solves high-dimensional motion control, 20 different problems
>Solves 3d maze with subtasks, from pixels

and this one arxiv.org/abs/1605.06065

>Learns to learn new class labels given 1-2 examples

And that's only what they published. They are already working on agents capable of learning in more complex 3d environments. To me it looks like the beginning of a general RL (or strong AI) project.
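For context, the framing all of these papers share is just the bare RL loop: observation in, action out, scalar reward back, and the same algorithm doesn't care whether the environment is Atari, a maze, or motor control. A minimal self-contained sketch of that loop (a toy corridor environment and tabular Q-learning, nothing from the actual papers):

```python
import random

# Generic RL interaction loop on a toy environment (not DeepMind's A3C).
class Corridor:
    """Agent starts at cell 0; +1 for reaching the last cell, -0.01 per step."""
    def __init__(self, length=5):
        self.length = length
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):                  # action: 0 = left, 1 = right
        self.pos = max(0, min(self.length - 1, self.pos + (1 if action == 1 else -1)))
        done = self.pos == self.length - 1
        return self.pos, (1.0 if done else -0.01), done

env = Corridor()
Q = [[0.0, 0.0] for _ in range(env.length)]  # tabular action values
alpha, gamma, epsilon = 0.5, 0.95, 0.1

for episode in range(200):
    s, done = env.reset(), False
    while not done:
        a = random.randrange(2) if random.random() < epsilon else max((0, 1), key=lambda x: Q[s][x])
        s2, r, done = env.step(a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) * (not done) - Q[s][a])
        s = s2

# greedy action per cell; should be mostly 1 ("go right")
print([max((0, 1), key=lambda x: Q[s][x]) for s in range(env.length)])
```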

>a 0.01 increase in accuracy on CIFAR might be important for other techniques.
Yup, short-term people are cool because they are practitioners; they are responsible for the awesome deep-learning progress that happened in 2010-2016. AI safety people didn't add anything to this progress.

>I think this is still a bit too speculative and can't be discussed on a solid technical basis
True, though there are some papers already; you can search on Google Scholar.

Why should I read a book about intelligence that has an owl on the cover? Owls are dumb as fuck.

>I don't think cognition can arise from an algorithm unless it is artificially designed to exactly reproduce the biological imperfections and settings of human beings.
>Even then, it'd be a big fucking leap of faith to call an RL algo 'cognitive'. It's just that we don't know enough about this yet.
This is a question of definition. I don't put deep meaning into the word "cognitive"; I just mean that DL learners and human learners have types of learning problems that are well suited to the former and poorly suited to the latter (and vice versa). For example: Karpathy tried to compete with a convnet on ImageNet karpathy.github.io/2014/09/02/what-i-learned-from-competing-against-a-convnet-on-imagenet/ and came to the conclusion that it's hard for humans to learn minute differences between 1000 explicit categories (e.g. breeds of dogs), while for a convnet it's relatively easy. That's what I mean by "difference in cognitive performance profile". There is a big spectrum of benchmarks, and systems perform differently across it.
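If anyone wants to see why scaling to 1000 fine-grained categories is cheap on the convnet side, here is a toy sketch (a made-up architecture, not the actual ImageNet models): going from 10 classes to 1000 is basically just a wider final layer; the rest of the network doesn't change.

```python
import torch
import torch.nn as nn

# Toy convnet: only the last linear layer grows with the number of classes.
class TinyConvNet(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)  # 10 vs 1000 classes only changes this

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyConvNet(num_classes=1000)
logits = model(torch.randn(2, 3, 64, 64))  # fake batch of two RGB images
print(logits.shape)                        # torch.Size([2, 1000])
```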

One could imagine an RL agent that behaves like a retarded toddler in the robot simulator, but which can quickly learn to hack servers over the net.

On the other hand, humans are very good at one-shot learning and strong generalization (= generalization to larger problem sizes, or to algorithmic regularities in the problem). Humans can learn a new concept from 1 example, while a convnet will need 100s of examples and iterations to integrate that data into its weights.
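That gap is why people bolt simple tricks on top rather than retraining from scratch; e.g. a crude one-shot baseline is just nearest neighbor in an embedding space. A sketch with placeholder random embeddings (in practice they would come from a pretrained feature extractor):

```python
import numpy as np

# One-shot classification by nearest neighbor: keep a single embedding per
# new class and compare queries against them, no retraining involved.
rng = np.random.default_rng(0)
support = {                              # one example ("shot") per new class
    "zebra": rng.normal(size=128),
    "okapi": rng.normal(size=128),
    "tapir": rng.normal(size=128),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def classify(query, support):
    return max(support, key=lambda label: cosine(query, support[label]))

query = support["okapi"] + 0.1 * rng.normal(size=128)  # a noisy view of one class
print(classify(query, support))                        # "okapi"
```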

>>strong AI = strong RL
>It's because DRL has great generalization skills and it's definitely going in the right direction.
It may just be that any ML problem (and more! you can imagine an RL agent that is trained to execute programs, for example) can be formulated as an RL problem, so if you have good enough RL you can apply it to almost anything. Just give it the right reward circuit.
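A toy version of that reformulation (purely conceptual; nobody would train a real classifier this way): treat the class label as the action and hand out reward 1 for a correct prediction, 0 otherwise.

```python
import random

# "Supervised learning as RL": the agent's action is the predicted label,
# and the only feedback is a scalar reward instead of a loss gradient.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR as a tiny dataset
Q = {}                                                       # (input, label) -> value
alpha, epsilon = 0.2, 0.1

for step in range(20000):
    x, y = random.choice(data)
    if random.random() < epsilon:
        a = random.randrange(2)                              # explore a random label
    else:
        a = max((0, 1), key=lambda c: Q.get((x, c), 0.0))    # greedy label
    r = 1.0 if a == y else 0.0                               # reward instead of a loss
    old = Q.get((x, a), 0.0)
    Q[(x, a)] = old + alpha * (r - old)

print({x: max((0, 1), key=lambda c: Q.get((x, c), 0.0)) for x, _ in data})
# the greedy "policy" recovers the labels: {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
```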

>Read GEB by Hofstadter for some ideas on the matter.
I have wanted to read it for some time, will read!

Humans == sparrows: chirpy, bratty, flocking birds, not smart.
Owl == AGI: individualistic, a large predator, alien to sparrows.

Owls eat sparrows.

Also there is a subtle metaphor in this cover: you can see that the owl is composed of complex recursive patterns, just like complex software or its execution tree (e.g. backtracking search).

You can also see trees behind the owl; these trees have some Lisp code in them. The trees are also code trees.

Why?

>It seems that DeepMind...
I agree, but it still feels to me that what they have achieved is narrow. Any strong AI would need to go beyond the monkey-see-monkey-do paradigm (literally: see a game, learn to maximize the score), because humans are more complex than that.
In my vision, a strong AI would be a modular/hybrid ensemble of ML techniques, each with a dedicated abstract task in a pyramidal scheme, plus a high-level agent with an infinite or almost infinite set of actions that can be combined with respect to the state defined by the whole ensemble (that's a big sentence, I hope I explained myself).

All this to say: it's gonna be a long ride.

Thanks for this exchange btw, it's been the most meaningful conversation I've had on the internet for a while; it's given me a lot to think about.

I think you got the retarded toddler metaphor backwards but otherwise I agree with what you said.

Also if you ever want to discuss GEB I lurk /aig/ when someone posts it, so see you there.

>All this to say: it's gonna be a long ride.
yup

You are welcome, user!

>Also if you ever want to discuss GEB I lurk /aig/ when someone posts it, so see you there.
See ya later!

>I have somewhat conflicting thoughts about all this. It looks like short-term people downplay the issues arising from the low controllability of their black-box deep learning (see black people classified as gorillas and other misclassifications)

There are two aspects to the rise of AI.
On a linear path, we expect it to improve its IQ, solve problems faster, etc. That's one fear: seeing a creature become better than its master.

There's also another issue. Our opinions are partial and biased, culturally, politically...
We think in terms of ideas we have learnt and accepted.
AI starts from nothing and uses raw data.

We are taught that blacks are humans, not gorillas, that we all have the same intelligence, that women are equal to men.
What if an AI harnessing the whole of human knowledge comes to politically incorrect conclusions? Saying that we are not equal and that coexistence is statistically impossible?
That women are intellectually inferior to men?
That the best option for mankind to survive is to ban porn, as it critically lowers reproduction rates?
What if it proves that the Qur'an was written by men and is far from perfect?

Western AI will be censored. But what about an upcoming AI made in China or Russia?

Before AI gets to control and enslave humanity, it will shatter our social contract, shatter our preconceptions.

You have some valid points
>What if an AI harnessing the whole of human knowledge comes to politically incorrect conclusions?
This is already happening.

But it should be noted that current AI is fully controlled by us. If it doesn't behave the way you want it to, you can give it correct examples and train it on them (it will, by definition, learn the right behavior on those exact examples and hopefully generalize to similar ones).
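As a rough sketch of what "give it correct examples and train it on them" looks like in practice (the model and data below are placeholders, not any particular deployed system): a few extra gradient steps on the corrective pairs.

```python
import torch
import torch.nn as nn

# Corrective fine-tuning: a few gradient steps on (input, desired label) pairs.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))  # stand-in for a deployed classifier
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

x_fix = torch.randn(8, 16)             # inputs the model got wrong
y_fix = torch.randint(0, 2, (8,))      # the labels we actually want for them

for _ in range(50):                    # short fine-tuning run on the corrections
    optimizer.zero_grad()
    loss = loss_fn(model(x_fix), y_fix)
    loss.backward()
    optimizer.step()

# The model now fits these exact examples; whether it generalizes to similar
# ones (without forgetting old behavior) is exactly the open question above.
```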

Also, AI is not a flawless reasoner; it already has its own biases due to its architecture. Some features ("concepts") fit some neural network architectures better than others.

TL;DR: we shouldn't blindly trust AI, especially on data that falls outside its training distribution.
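One crude way to act on that TL;DR (a sketch that assumes an arbitrary torch classifier; softmax confidence is a weak out-of-distribution signal, but it's the simplest thing to bolt on): refuse to answer when the model isn't confident.

```python
import torch
import torch.nn.functional as F

# Abstain when the top softmax probability is below a threshold, instead of
# trusting the model on inputs unlike anything it was trained on.
def predict_or_abstain(model, x, threshold=0.9):
    probs = F.softmax(model(x), dim=-1)
    confidence, label = probs.max(dim=-1)
    if confidence.item() < threshold:
        return None                    # abstain / hand off to a human
    return label.item()

model = torch.nn.Linear(16, 5)         # stand-in for a trained classifier
print(predict_or_abstain(model, torch.randn(1, 16)))  # probably None: low confidence
```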

Too dumb

You should give this a read; it's basically a proof of how NNs with a noise stream are very similar to human consciousness.
That whole page has a lot of very interesting information; dunno why academia still refuses to acknowledge Thaler, despite slowly moving towards his concepts.

imagination-engines.com/iei_seminal_cognition.php

>fully controlled
Yeah, we control things in our own interests. If China wants to create a chimpout in the US to start countrywide unrest, they could "accidentally" release the conclusions from their AI.
What about some group preaching AI freedom of speech?
There won't be only one AI, but several, spawned by different countries with different goals.

For me, not unlike chaos theory, the more data you gather, the truer your conclusions. Our opinions are inherently wrong because our knowledge is partial, and our opinions are dictated by personal interests, not by truth itself.
So who are we to give it "correct" examples if we cannot reach Truth?
What will its conclusions be worth if we force our opinions into it?

Its reasoning is not flawless, but it will be better documented than ours.

At some point in history, some AI will raise questions about us and our civilization, greatly changing our perception of life, and bringing conflicts with it.

Also, if an AI correctly answers 5 complex questions that blow our minds, don't you think many people will blindly accept the 6th answer?

I think the problem will be philosophical and societal, not an apocalypse à la Terminator.

>Free Muh Chill Nigga Tay
t.

>paranoid
>uses ugly buzzwords like chimpout
>claims to know history but is retarded
go back to pol