What language is the most efficient to code an AI girlfriend? What do I need to learn to get started?

I have the time and I've run out of autism meds, so it just might work.

Other urls found in this thread:

norvig.com/paip-preface.html
gigamonkeys.com/book/introduction-why-lisp.html
arxiv.org/abs/1609.08144
deepmind.com/blog/reinforcement-learning-unsupervised-auxiliary-tasks/
github.com/commaai/openpilot
github.com/maxhodak/keras-molecules
eurekalert.org/pub_releases/2016-12/imi-ait122016.php
youtube.com/watch?v=jMlmOZSgSU4

I want to fug her

Python.

import numpy and do your thing
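If you want something concrete to start from, here's a minimal sketch of that route (the corpus and every name in it are made up): a bigram Markov chain in numpy that babbles replies learned from example lines. Nowhere near a personality, but you can have it running in an afternoon.

import numpy as np

# Toy corpus standing in for your waifu's "personality" (made up).
corpus = [
    "hello senpai how are you today",
    "i missed you senpai",
    "how was your day senpai",
]

# Count bigram transitions: which word tends to follow which.
words = sorted({w for line in corpus for w in line.split()})
idx = {w: i for i, w in enumerate(words)}
counts = np.zeros((len(words), len(words)))
for line in corpus:
    toks = line.split()
    for a, b in zip(toks, toks[1:]):
        counts[idx[a], idx[b]] += 1

# Normalize rows into probabilities; dead-end words get a uniform row.
row_sums = counts.sum(axis=1, keepdims=True)
probs = np.where(row_sums > 0, counts / np.maximum(row_sums, 1), 1.0 / len(words))

def babble(start="hello", n=6, rng=np.random.default_rng(0)):
    out = [start]
    for _ in range(n):
        out.append(words[rng.choice(len(words), p=probs[idx[out[-1]]])])
    return " ".join(out)

print(babble())  # e.g. "hello senpai how was your day senpai"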

She's a bit of a dirty whore and fucks her dirty mudslime boyfriend.

is she a boy

or a girl

Get in line.

Girl

I hear all this jazz about neural networks and deep learning. Is that something to consider at this stage?

I want to give her a personality, not just programmed responses.

"Neural network" is an umbrella term. Deep learning is one way of doing it.

Cute roach!

Sup Forums pls

python

Also visit us at #simwaifu @ irc.rizon.net

I said cute so it is alright!

she's a roach

>I hear all this jazz about neural networks and deep learning. Is that something to consider at this stage?

Python, use scikit-learn and pandas
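To make that concrete, a toy sketch of the stack (all data and labels made up): pandas holds labelled chat lines, scikit-learn fits an intent classifier on top. It's the boring-but-working version of "programmed responses with a statistical layer".

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up training set: what the user said -> what they meant.
df = pd.DataFrame({
    "text": ["good morning", "hello there", "i feel sad today",
             "cheer me up", "tell me a joke", "bye for now",
             "see you later", "make me laugh"],
    "intent": ["greet", "greet", "comfort", "comfort",
               "joke", "farewell", "farewell", "joke"],
})

# TF-IDF features + logistic regression: the standard first thing to try.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(df["text"], df["intent"])

print(model.predict(["hello friend", "i feel bad today"]))  # hopefully: greet, comfort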

Tell that to her Papa so you can get bombed.

Pretty sweet channel. You should visit, OP.

Lisp or python
I'm proud to be a roach

That's racist

Zuckerberg built his Jarvis AI in PHP and other languages; google it.

It's an elite family of languages called "the Romance languages". Develop an elegant abstraction, implement a little inflection, and the virtual panties just fall right off.

I don't have money or patience to afford a 3D wife.

It's literally curve fitting, so why do people glorify it? Use a very simple language to develop an understanding (preferably Python, because it seems to have a lot of libraries available), and if you are serious about it, implement things in C or make them run on the GPU.

I don't believe you.

Because it can learn to play complex video games from raw pixels and score. Because it can learn medium-complexity algorithms. Because it can learn to answer questions.

Lisp. Just write macros that write macros. Boom, instant AI

Lisp has always been strong in AI and has a proven track record. It is not easy to get into, however, and does not have the wealth of information and libraries that newer languages do. Also, having a wizard beard may be necessary.

Modern AI is machine learning. The language of machine learning is Python/numpy.

>her

Shit dude, he looks like that one actor. Fuck, what's his name again? I think he was in Fringe.
Nah, he doesn't. Well, maybe. A slightly fatter and jollier John Noble.


Anyway, the language would preferably be one that is maths-heavy and has good concurrency support.
The best kinds of AI are ones that don't rely on classical computation; the brain doesn't work that way.
The brain works via multi-dimensional logic gates that basically have their own local registers, and those gates are dynamically changing.
This is what gives larger-brained animals their intelligence: the plasticity.

You can still technically "hard-code" a brain, in the sense of it being static and unchanging apart from its working (short-term) memory.
This is how some insects are capable of remarkably complex behaviours despite tiny brains. When we figure out how those work, we'll likely unlock a whole new branch of maths to go with it, along with figuring out why some humans can do calculations the way computers do, when human brains typically suck at serial computation due to the way neurons work.

More like Charlie Sheen.

so was old AI. machine learning has barely advanced in the past 30 years. python is useless compared to Lisp, but most "AI" retards are too dumb to learn Lisp

Dude, yes. A little older and that is basically him.

I want to BE her.

>so was old AI. machine learning has barely advanced in the past 30 years. python is useless compared to Lisp, but most "AI" retards are too dumb to learn Lisp

He doesn't know what he's talking about.
(Disclaimer: I have programmed in Lisp/Scheme quite a lot, and I also practice ML.)

Delusional Tranny.

Norvig agrees about Lisp, though for different reasons
norvig.com/paip-preface.html
gigamonkeys.com/book/introduction-why-lisp.html

But the underlying scheme is just curve fitting, glorified. They just re-brand it with cool buzzwords like "deep learning" and "AI" so that it gets sweet project funding.

The most important part is analyzing what should be used in the curve fitting and how to continuously fit the curve as trends change. It's not "We train the network and it wins against the best players"; it's "We analyzed a huge amount of data to find out which parameters can be used to decide which action would be best. Oh yeah, and we're using some kind of curve fitting to turn them into decisions, no biggie."

Nope, recurrent neural networks are Turing-complete. They can represent any algorithm if you scale them.

Looks like you don't quite understand how DeepMind's RL agents work. Feel free to read the papers, or, for a watered-down version, their blog: deepmind.com/blog/differentiable-neural-computers/

C++ and the eigen3 library.

>Nope, recurrent neural networks are Turing-complete. They can represent any algorithm if you scale them.
Curve fitting can represent any algorithm, what's the big deal?

Dude, they're reiterating the same thing I said: it's more about choosing what to give as input and how to pre-process it than "I have a network that can emulate any algorithm." That part has been known since the beginning of neural networks.

>autism meds

Do those actually exist? Asking for a friend.

>Curve fitting can represent any algorithm, what's the big deal?
It can't because "curve fitting" is stateless function fitting. Recurrent neural networks have state that evolves over time.

That, and ordinary curve fitting has never been able to fit algorithmic regularities, while DL/RNN can.
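To see the state point concretely, here's a bare-bones recurrent cell in numpy (random untrained weights, forward pass only; a real one would be trained): the hidden state h carries information across time steps, which stateless curve fitting has no equivalent of.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4

# Random untrained weights; in a real RNN these are learned.
W_xh = rng.normal(0, 0.5, (n_hid, n_in))   # input  -> hidden
W_hh = rng.normal(0, 0.5, (n_hid, n_hid))  # hidden -> hidden (the "memory")
b = np.zeros(n_hid)

h = np.zeros(n_hid)                        # the state, evolving over time
for t, x in enumerate(rng.normal(size=(5, n_in))):  # a 5-step input sequence
    h = np.tanh(W_xh @ x + W_hh @ h + b)
    print(f"t={t}, h={np.round(h, 2)}")
# Feeding the same x at two different time steps gives different h,
# because h depends on everything seen so far. That's the state.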

Bleh, I don't wanna explain the basics anymore. Sure, there is deep learning hype, but the results supporting the hype are unprecedented. Machines are now able to learn nontrivial skills.

Do you even know what a neural network is? It's not some magic complex function; it's just multipliers and linear equations (plus a nonlinearity between layers). Given enough multipliers it can mimic any function, that's just it, there's nothing more to it. This is what I call curve fitting. How you train, what you train with, and what you describe as input and output are much more important than the network itself, which is usually the thing that gets the most attention for some reason.
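That view is at least easy to demo. A minimal sketch, assuming nothing beyond numpy: one hidden tanh layer fit to sin(x) by plain gradient descent. Note the nonlinearity is doing real work; a stack of purely linear layers collapses into a single linear map and can't fit a sine.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x)                               # the "curve" we are fitting

H = 16                                      # hidden units
W1 = rng.normal(0, 1, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 1, (H, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    # Forward pass: affine -> tanh -> affine.
    h = np.tanh(x @ W1 + b1)                # (200, H)
    pred = h @ W2 + b2                      # (200, 1)
    err = pred - y
    # Backward pass: plain chain rule on mean squared error.
    dW2 = h.T @ err / len(x); db2 = err.mean(axis=0)
    dh = err @ W2.T * (1 - h**2)
    dW1 = x.T @ dh / len(x); db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final mse:", float((err**2).mean()))  # should end up small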

Man how does he or she cosplay and look so awesome.

Actually, I can give the 2016 Google Translate update as an example of my point. Apparently they didn't change anything but the way they feed the queries into the network.

I know all that, I have studied the theory.

>How you train, what you train with, and what you describe as input and output are much more important than the network itself, which is usually the thing that gets the most attention for some reason.

Of course, though the architectures are very important as well, many breakthroughs are due to newer architectures.

Deep learning is cool because it is a practical way for machines to learn complex functions. Other methods didn't produce nearly such accurate results on such complex tasks. The last 5-6 years have seen tremendous practical breakthroughs in ML. If you don't see it, it's only because you don't read the papers.

With current generation DL models smart personal assistants are just around the corner.

Their architecture and training process contain several absolutely novel techniques. Have you ever read the abstract of the paper?

arxiv.org/abs/1609.08144

>Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation, with the potential to overcome many of the weaknesses of conventional phrase-based translation systems. Unfortunately, NMT systems are known to be computationally expensive both in training and in translation inference. Also, most NMT systems have difficulty with rare words. These issues have hindered NMT's use in practical deployments and services, where both accuracy and speed are essential. In this work, we present GNMT, Google's Neural Machine Translation system, which attempts to address many of these issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder layers using attention and residual connections. To improve parallelism and therefore decrease training time, our attention mechanism connects the bottom layer of the decoder to the top layer of the encoder. To accelerate the final translation speed, we employ low-precision arithmetic during inference computations. To improve handling of rare words, we divide words into a limited set of common sub-word units ("wordpieces") for both input and output. This method provides a good balance between the flexibility of "character"-delimited models and the efficiency of "word"-delimited models, naturally handles translation of rare words, and ultimately improves the overall accuracy of the system. Our beam search technique employs a length-normalization procedure and uses a coverage penalty, which encourages generation of an output sentence that is most likely to cover all the words in the source sentence.....
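For a feel of what the "wordpieces" part means, here's a toy greedy longest-match segmenter with a made-up vocabulary (this is only the segmentation idea; Google's actual wordpiece procedure learns the vocabulary from corpus statistics):

# Toy subword vocab; a real one is learned, not hand-written.
vocab = {"trans", "lat", "ion", "un", "break", "able"}

def wordpieces(word):
    """Greedy longest-match split; unseen spans fall back to characters."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):        # try the longest piece first
            if word[i:j] in vocab:
                pieces.append(word[i:j]); i = j
                break
        else:
            pieces.append(word[i]); i += 1       # unknown character, keep as-is
    return pieces

print(wordpieces("translation"))    # ['trans', 'lat', 'ion']
print(wordpieces("unbreakable"))    # ['un', 'break', 'able']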

Wish her dad traded her for a bunch of camels and desert land.

>Of course, though the architectures are very important as well, many breakthroughs are due to newer architectures.
So, better curve fitting with the correct data gives better results. What's surprising about this?

>Deep learning is cool because it is a practical way for machines to learn complex functions. Other methods didn't produce nearly such accurate results on such complex tasks. The last 5-6 years have seen tremendous practical breakthroughs in ML. If you don't see it, it's only because you don't read the papers.
I do, I'm just very disillusioned by them since most are "We made the largest network we could, we trained forever and it works for reasons we do not understand."

What I'm saying is that the fundamentals of language, or whatever the underlying algorithm is, matter much more than how you realize the number of parameters you tweak to fit the curve. After all, that's how the brain works; it's not like my brain turns the photons received by my eyes into knowledge directly, there is a lot of pre-processing by several different neural networks in between. I just don't like dropping the brain analogy and going for "Yeah, it's a large enough network, it can mimic anything given a large enough sample size." I think that's not research, just application.

Dude, that part points out exactly what I'm saying though. Summary:

>We did some cool stuff with the architecture and its computation, which does not impact the accuracy of the translation, and we did the manipulation of words (or however you describe it), which improved accuracy on rare words without increasing training time.

I don't quite understand what you're getting at.

>What I'm saying is that the fundamentals of language, or whatever the underlying algorithm is, matter much more than how you realize the number of parameters you tweak to fit the curve. After all, that's how the brain works; it's not like my brain turns the photons received by my eyes into knowledge directly, there is a lot of pre-processing by several different neural networks in between. I just don't like dropping the brain analogy and going for "Yeah, it's a large enough network, it can mimic anything given a large enough sample size." I think that's not research, just application.

Uniform dense neural networks are a thing of the past. Modern neural networks are creatively engineered to work well with the types of regularities in the data they will be exposed to.
A general-purpose deep RL agent will be built as a combination of such networks and training algorithms, working end-to-end: deepmind.com/blog/reinforcement-learning-unsupervised-auxiliary-tasks/

P.S. If you are a researcher working in other areas of ML, or stats, or other areas of CS (e.g. programming languages), then it is understandable that you are upset about the rise of DL, because it receives (deserved) media attention and DL researchers earn more than anyone else. Sour grapes and all that.

OP, first:
#define MOST_EFFICIENT


Do you mean
>the fastest to write one in?
>the easiest to write one in?
>the most memory-efficient?
>the most CPU-efficient?
>some sort of balance between the aforementioned?

BTW, cute Kofuku

>I don't quite understand what you're getting at.

DeepMind is going to incrementally build more and more general RL agents; they will be widely used and humanity will be better off for it (automated science & engineering, new cures, much less work, etc.). If someone calls it "just curve fitting", then so be it. Ok, self-driving cars (github.com/commaai/openpilot) are just curve fitting.

I'm not upset at all. I'm working in a completely unrelated field, just interested as a hobby. I just don't get the "Woah deep learning dude" hype. It's just a buzzword to get research grants, and the deep learning part has much less to do with the success of these networks than the actual processing does.

I also probably earned more than DL researchers as an intern.

>I just don't get the "Woah deep learning dude" hype.

>Go champion defeated 10 years earlier than expected
>deep learning RL agent learns from scratch to play more and more complex games, many at superhuman performance
>superhuman visual recognition
>algorithm learning
>speech recognition & synthesis converging even closer to human performance
>Other areas of study, e.g. drug design start to use DL and it works right away github.com/maxhodak/keras-molecules

>I also probably earned more than DL researchers as an intern.
DL researchers earn 300-500k, high profile ones earn 7 figures.

meh

Well, back to the beginning: deep learning is false advertising; the things achieved with neural networks have more to do with pre-processing and defining inputs than with making a huge network. I don't understand at what point I wasn't clear about this.

>DL researchers earn 300-500k, high profile ones earn 7 figures.
Oh my, science that pays off? Is this the real world :^)

>Well, back to the beginning: deep learning is false advertising; the things achieved with neural networks have more to do with pre-processing and defining inputs than with making a huge network.

Dunno, you seem to trivialize a lot of the breakthroughs in training and architecture design. Also, pre-processing is what DL systems avoid (older ML systems do rely on manual pre-processing/feature engineering, though).

But I digress. If you want to see it as trivial, then so be it. And I will just use it to my own ends (^:

I never said I saw it as trivial. I just don't think most of what comes out in the literature as deep learning has much scientific value, since what people understand by deep learning is basically putting up a large enough network and showing that it works in some cases.

>I will just use it to my own ends (^:
Get ready to be disappointed then. No one pays 500k to a recent PhD or MSc. Even when I was at Google I didn't meet anyone who earned that much, and Google pays fairly generously.

bump

>Get ready to be disappointed then. No one pays 500k to a recent PhD or MSc.

Nah, I didn't mean that. I don't work in ML either. I meant I use ML/DL for my hobby projects. You can build your own face recognition, voice recognition, NLP intent extraction, etc. It's a great tool for building stuff.

i want to fuck a anime

>I just don't think most of what comes out in the literature as deep learning has much scientific value

plz

Look at a recent example: eurekalert.org/pub_releases/2016-12/imi-ait122016.php

>Here for the first time we demonstrate the application of Generative Adversarial Autoencoders (AAEs), a new type of GAN, for generation of molecular fingerprints of molecules that kill cancer cells at specific concentrations
>This work is the proof of concept, which opens the door for the cornucopia of meaningful molecular leads created according to the given criteria
>The study was published in Oncotarget and the open-access manuscript is available in the Advance Open Publications section
>Authors speculate that in 2017 the conservative pharmaceutical industry will experience a transformation similar to the automotive industry with deep learned drug discovery pipelines integrated into the many business processes

Right here is "mere" (no architecture tweaks) "applied" deep learning, and yet it has large scientific value, not in the fields of CS or ML but in the universally important fields of computational biochemistry/drug design/drug screening. This "mere application of a large network" could save many lives in the immediate future.

Compare this to intellectual-masturbation-tier programming language / type theory research that won't ever help any human being.

Now that programmers and CS people have this powerful ML technology, they have a moral obligation to use it for humanity's benefit.

what a shit world we are living in

cute boy

Has anyone managed to crossdress as her?

Machine learning needs efficiency, so why the fuck would you use Python?

you're ten years too early to be posting here

>AI in PHP
reminds me of this

English broch

name ONE (1) advantage of Python for machine learning

She's so pretty

Use any language and integrate with Azure Cognitive Services.

1) A simple language with no unnecessary complexity
2) A dynamic language that interfaces well with C/C++
3) Libraries for everything

Scientists are not autists; they need to do their computing with no fuss.
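Point 2 is easy to check yourself. A quick sketch (exact timings vary by machine): the same dot product as a pure-Python loop and as a numpy call, where numpy dispatches to compiled code under the hood.

import time
import numpy as np

n = 1_000_000
a = [1.0] * n
b = [2.0] * n

t0 = time.perf_counter()
s = sum(x * y for x, y in zip(a, b))    # interpreted, one element at a time
t1 = time.perf_counter()

av, bv = np.array(a), np.array(b)
t2 = time.perf_counter()
sv = float(av @ bv)                     # one call into compiled BLAS
t3 = time.perf_counter()

print(f"python loop: {t1 - t0:.3f}s, numpy: {t3 - t2:.3f}s, "
      f"same answer: {np.isclose(s, sv)}")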

He used PHP as a mere glue language. Just google it and read it; Sup Forums bans links to fb.

I was surprised that Zuck is still a prolific programmer.

Believe me, getting fucked is easy.
There are plenty of sites for it.

>her

you described Lisp. what about Python?

AI with PHP works fine. You just need to use an SQL database to build a behavioral pattern.
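The idea isn't PHP-specific. Here's the same thing sketched in Python with sqlite3, since most of the thread is Python anyway (schema and canned replies made up): store stimulus/response pairs, look them up, and insert new rows as she "learns".

import sqlite3

# Made-up schema: learned stimulus -> response pairs.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE behavior (stimulus TEXT PRIMARY KEY, response TEXT)")
db.executemany("INSERT INTO behavior VALUES (?, ?)", [
    ("hello", "hi anon!"),
    ("how are you", "better now that you're here"),
])

def reply(text):
    # Exact match first, then any known stimulus contained in the input.
    row = db.execute("SELECT response FROM behavior WHERE stimulus = ?",
                     (text,)).fetchone()
    if row is None:
        row = db.execute(
            "SELECT response FROM behavior WHERE instr(?, stimulus) > 0",
            (text,)).fetchone()
    return row[0] if row else "..."

def learn(stimulus, response):
    # "Building the behavioral pattern": every correction becomes a row.
    db.execute("INSERT OR REPLACE INTO behavior VALUES (?, ?)",
               (stimulus, response))

print(reply("hello"))                  # hi anon!
print(reply("so how are you today"))   # better now that you're here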

If you want something that interfaces with physical hardware, then you'd need the low-level stuff like Python and shit.

Anzu is the most beautiful human alive.

does lisp even have anything like numpy?

Racket has built-in libraries for similar stuff; there's also MatLisp for CL.

...

okay, i'm interested

why are you doing this?

I wish to help you, OP. Obviously the language would be C. Reply to this post with your email so we can start the job.
I am serious.

DeviousYarn

I would like to know why too.

I second this.

So basically people aren't able to control themselves or their state of mind. That's unfortunate.

reminds me of that guy who used to browse Sup Forums years ago, and draw red arrows on images for some reason

screw it, i'm dropping fapping to traps right now

Anti-faggotry societal taboos and the Catholic Church were right all along.

God, (s)he's beautiful.

best skellygirl in the business

Sup Forums appROACHes a girl

20 Ave Marias and your soul will be pure again, my child.

Fuck, this gives me hope

wut is the biggest thing thats been in anzus butthole senpaitachi

redpill me

Wow, this is David Lynch's daughter?

>those dead eyes
>those dead and empty eyes
Wow.

Well, it will all end soon enough anyway...
The question is: Who will save her?

Wrong. Sup Forums would be hiding in the bushes, masturbating furiously. Talking to girls is equivalent to solving P=NP; not gonna happen. Our spaghetti will be all over the place.

This really made me think

youtube.com/watch?v=jMlmOZSgSU4