Ask a philosophy professor anything, future-technolo/g/y-wise.

Why do you think anybody would care about what a philosopher has to say about technology?

Came here to say this.

It inspired the post, so there's at least one contribution.

You are both stupid; philosophy is the essence of all science.

Prove to me that you are a philosophy professor.

I assume you've read the Symposium, so tell me: 1) what is the name of the young man who, according to Socrates, offered himself sexually to him, and 2) why did Socrates accept?

What do you think of people uploading their consciousness to an Android or computer?

You're too kind, user!

>using tumblr images instead of images from the actual website
kys

Forgive my sins.

Is it possible to create a set of questions that people can answer so that a government can run algorithmically based on these questions? Is there a set of questions that describes "political intent" well enough that the whole government can be automatic?

That way people would have MORE democratic say in how the government is run, and there would be no need for elections. People would only need to update their questionnaire whenever they change their mind on something.

What fundamental meta-rules would need to exist for such a government to function?
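
To make that concrete, here is a toy sketch of the questionnaire side, assuming preset questions on a fixed agree/disagree scale. Every name, question, and function in it is invented purely for illustration, not a real design:

```python
# Hypothetical sketch only: preset questions, each tied to a policy axis,
# answered on a -1.0 .. +1.0 scale (strongly disagree .. strongly agree).

from dataclasses import dataclass

@dataclass
class Question:
    text: str         # the fixed wording every citizen sees
    policy_axis: str  # which policy dimension the answer feeds into

QUESTIONS = [
    Question("The state should fund healthcare.", "welfare"),
    Question("Taxes on high incomes should rise.", "taxation"),
]

def record_answers(ledger: dict, citizen_id: str, answers: list[float]) -> None:
    """Overwrite this citizen's current answers; no elections, just updates."""
    if len(answers) != len(QUESTIONS):
        raise ValueError("answer every preset question")
    if not all(-1.0 <= a <= 1.0 for a in answers):
        raise ValueError("answers must lie in [-1, 1]")
    ledger[citizen_id] = answers

ledger: dict[str, list[float]] = {}
record_answers(ledger, "citizen-42", [0.8, -0.3])  # updatable at any time
```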

As long as the android or computer SEEMS like a good enough approximation (by copying the person's brain), and as long as the process kills the original host, then it is "good enough" for all practical purposes. Just program the android/computer to always pretend to remember everything from before, in a continuous fashion, just as if the "mind" had continued seamlessly. As long as the family is superstitious enough to think that it is possible, you can keep the money.

Lain sucks.

I'm sure theoretically there would be a way for that to happen. That creates some more questions:
1: Who would be the "candidates" for said governmental positions?
2: What exactly would get done if there was any kind of mass-idea switching, leading to gridlock in legislation? (Assuming there is still legislation)

As far as rules go: it would need a fully transparent process for where this "questionnaire" goes, and a level of trust that the process is actually being followed.

Tl;dr: It basically all comes down to the power structure, or lack thereof.

I don't believe you're a philosophy professor, I believe you're a first year philosophy student.

Was this guy right?

Flat out incorrect.

Prove it then

What's positivism, and why should I even care?

Philosophically and sort of politically, yes.

Execution-wise, quite flawed.

Okay, so once upon a time cells started to collaborate, and they later became "one" as in a human (for example). Humans started communicating just like cells and nerves communicate inside the body: the data is sent to Google where it gets processed, then Google tells people what to do to coordinate, just like a human body. IS THIS A NEW LIFE FORM? Everything seeks to be "one", it seems. Viva la communism!

You should not, because it is the weakest argument for how a society should be governed AND/OR how to justify power systems. I've yet to meet any other philosophy teachers/professors or students who subscribe to this notion, though I'm sure some are out there, especially in this environment.

nice mathematical pattern you found there!

>I'm off my meds!
>Lainposting intensifies.

In all seriousness, you sort of have a point. If you view it purely as a collective, then you could describe it as a type of life form. Much like the mitochondrion is to the cell, the cell to the organ, and the organ to the body.

>Flat out incorrect.

>Can't answer this

/thread

You tried to be funny and edgy here, and failed. :)

How do you feel about getting paid to prepare students to become unemployed?

Where do these cool Lain gifs originate?

It's been a while, but wasn't it Alcibiades or some shit like that? I prefer Lacan.

Why do you expect us to care about your dubiously-professional philosophical opinions when you can't even use English correctly?

fauux.neocities.org/

Well, I don't get paid much either, but it's what I like. So we're all kind of in the same boat.

pos·i·tiv·ism /ˈpäzətivˌizəm/
noun (Philosophy)

1. a philosophical system that holds that every rationally justifiable assertion can be scientifically verified or is capable of logical or mathematical proof, and that therefore rejects metaphysics and theism.
- a humanistic religious system founded on this.
- another term for logical positivism.
2. the theory that laws are to be understood as social rules, valid because they are enacted by authority or derive logically from existing decisions, and that ideal or moral considerations (e.g., that a rule is unjust) should not limit the scope or operation of the law.

What is so hard to accept about positivism?

What are your thoughts on the philosopher Lil Ugly Mane?

It's not that I don't "accept" positivism; I just disagree with it being a good idea or school of thought. Much like most of Bernard's ideas.
>valid because they are enacted by authority or derive logically from existing decisions
^This.

Kek

Just delete this thread before you embarrass yourself further.

I prefer the teachings of Yung Lean.

Do you think it's possible for us to subconsciously manipulate random number generators? Because I swear sometimes I predict the outcome of RNG, and it's not that I'm always guessing and happen to be right occasionally; rather, sometimes I get this feeling that I know what the result will be, and when I get this certain feeling, it's always right. So either we can somehow manipulate outcomes, or predict them.
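
You could actually test that instead of trusting the feeling: log a guess only on the trials where the feeling shows up, then check your hit rate against chance. A minimal stdlib sketch of that test (the counts in the example are made up):

```python
# Quick self-test for the "I can predict RNG output" feeling.
# Log a guess ONLY when you get the feeling, then compare the hit count
# against chance with a one-sided binomial test. Pure stdlib, no scipy.

import math

def binomial_p_value(hits: int, trials: int, p_chance: float) -> float:
    """P(X >= hits) for X ~ Binomial(trials, p_chance)."""
    return sum(
        math.comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
        for k in range(hits, trials + 1)
    )

# Hypothetical example: 100 guesses of a 6-sided die, 25 hits
# (chance predicts about 16.7). A small p-value means it is worth a closer look.
print(binomial_p_value(25, 100, 1 / 6))
```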

en.wikipedia.org/wiki/One-Dimensional_Man

My first and last read about philosophy. It's a really hard but interesting read.

>1: Who would be the "candidates" for said governmental positions?

None; just the machine adapting according to the public will. Any government position would be chosen by the machine, I guess...

>2: What exactly would get done if there was any kind of mass-idea switching, leading to gridlock in legislation? (Assuming there is still legislation)

What do you mean, gridlock? People refusing to implement the rules that the machine makes? That can be solved in many ways; robots or drones might be sent to remove these people (maybe they could be made into protein supplements for cattle or zoo animals, or put to some other utilitarian and logical use).

Mass-idea switching would be no problem. A mass switch would just mean that any future choice the president AI makes would be according to this new idea-set.

For example, if the public one day thinks taking care of poor people is a good thing, then the president AI implements policies to take care of poor people. And if the next day most people think "poor people should suffer", then the president AI takes that into consideration the next time it makes a policy choice regarding poor people.

Of course, what the AI can and cannot do would be decided by the "meta-rules" I mentioned earlier. A sort of constitution; maybe making people who don't do their job into animal food should be avoided. Maybe the rules should be made so that the AI doesn't hurt the public. But what, then, about criminals?

What would the ideal AI government look like from behind a Rawlsian "veil of ignorance"?

>As far as rules go: it would need a fully transparent process for where this "questionnaire" goes, and a level of trust that the process is actually being followed.

Is there a way to do this with a minimum of trust?
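
One low-trust pattern that comes to mind (only a sketch, nowhere near a full voting protocol): publish every submission as a salted hash in an append-only chain, so each citizen can check that their own entry landed in the tally and anyone can recompute the log. All names here are hypothetical:

```python
# Minimal-trust sketch: each questionnaire submission is committed to a
# public hash chain. The citizen keeps their (answers, salt) pair and can
# later verify their entry is in the published chain; anyone can recompute
# the chain head. Real verifiable tallies need far more than this.

import hashlib
import json
import os

def commit(answers: list[float]) -> tuple[bytes, bytes]:
    """Return (commitment, salt); the citizen keeps the salt private."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + json.dumps(answers).encode()).digest()
    return digest, salt

def extend_chain(head: bytes, commitment: bytes) -> bytes:
    """Published step: the new head binds every prior submission."""
    return hashlib.sha256(head + commitment).digest()

head = b"\x00" * 32
c, salt = commit([1.0, -0.5])
head = extend_chain(head, c)  # head is published for all to see

# Later, the citizen re-derives the commitment and checks it is in the log.
assert hashlib.sha256(salt + json.dumps([1.0, -0.5]).encode()).digest() == c
```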

...

If you believe in parallel universes or timelines, then perhaps, though that's not 100% philosophy; it's still interesting.
I must have misread your original post; I thought it would still fall to "humans" to rule, not an AI. That changes everything.
IMHO, the ideal AI government would likely be very similar to the "Philosopher King" style of government, as long as law and order can be maintained. It's a cruel world, but it's one I believe an AI would prefer.

Is the GPL commie or not?
:^)

>IMHO, the ideal AI government would likely be very similar to the "Philosopher King" style of government, as long as law and order can be maintained. It's a cruel world, but it's one I believe an AI would prefer.

That would not in any way be like the AI I describe, where the AI's actions are an amalgam of the public at large (or of those who care enough to fill out the questionnaire).

Or do you think the public at large would like a philosopher king style of government so that the AI is forced to follow that government style?

The AI would want a Philosopher King style of government if it acted on its own.

However, with your questionnaire idea, I think it could work out very well.

Conversely, do you think the public would want an AI in any shape or form in their government ruling for (or, over) them?

You have to remember, one entity never runs a nation, no matter the political layout. Full AI government wouldn't work unless it could fully support itself without people involved, both technically and practically.

Would your AI create delegates?

Cruz will give them tho.

At a minimum, and the army isn't going anywhere.

Would your AI and its delegates cover the world, or just a country?

How would the AI assist with diplomatic disputes? Most people are too stupid to give the AI the best ideas, so "questionnaire uploading" might need to be restricted.

I don't think that matters for whether the government can be set up or interact with other countries. It would need to be able to converse, negotiate, and parse out bullshit regardless; its goals would probably just be unusually utilitarian. That affects the questionnaires as well, although I didn't bring that up originally.

>Conversely, do you think the public would want an AI in any shape or form in their government ruling for (or, over) them?

If it were a well-sketched-out idea. That is why we have philosophers: so you can do all the detail work on these things and let the rest of us chumps shill the ideas.

Nobody is happy with so-called liberal democracy. It might evolve in a fascist direction, or in some other direction, because due to the Dunning-Kruger effect people usually think they would do a better job than the politicians. The people are not exposed to the details and intricacies of government, so they think they would do a better job.

So EITHER we need to make the public feel more empowered in some way (an AI COULD potentially do that if the idea were fleshed out better),

OR people are going to just keep being discontented and make all sorts of crazy choices.

Sure, but the AI government COULD manipulate people to work for it by using money, titles, awards, and that sort of thing, in a similar fashion to what humans do. The prestige of getting an award might be magnified by the seemingly objective nature of a computer. It would be seen as a purely objective choice and might promote a sense in the person that it's a good system and needs to be protected. This is, of course, speculation; I have no idea about psychology. But the AI could theoretically have a good model of the human psyche that it could use to manipulate people. Also, military robots and drones might coerce people without the need for such a sophisticated model.

>Most people are too stupid to give the AI the best ideas, so "questionnaire uploading" might need to be restricted.

Which is why it is a questionnaire with preset questions. And in my view the questions should capture an attitude. All political choices are value-based, not logical. A person's attitude could be ascertained by the questionnaire, which would inform diplomatic disputes as an amalgam of the people: some sort of average attitude.

The AI would not consider any ideas from people at all; it would just develop ideas based on the average value judgements of the people. The whole point of the questionnaire is to inform the idea creation itself so that it does not rely on the intelligence of the people, only on their attitude, on what the people would like and dislike. The public taste, if you will.
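
Concretely, that "average attitude" could be as simple as a per-question median over the whole ledger (median rather than mean, so a loud minority submitting extreme answers moves it less). A toy version, with made-up names and data:

```python
# Toy "public taste" aggregation: the AI never reads free-form ideas,
# only aggregates preset value judgements. One score per question.

from statistics import median

def public_attitude(ledger: dict[str, list[float]]) -> list[float]:
    """One aggregate score per preset question, in -1.0 .. +1.0."""
    all_answers = list(ledger.values())
    n_questions = len(all_answers[0])
    return [median(a[q] for a in all_answers) for q in range(n_questions)]

ledger = {"a": [1.0, -1.0], "b": [0.5, -0.5], "c": [0.0, 1.0]}
print(public_attitude(ledger))  # -> [0.5, -0.5]
```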

Dictatorships have crap incentive models for the general populace; military control wouldn't work as long as people were involved in the production of anything important to sustaining society.

For a democracy, your idea would work out very well; I just don't think a democracy is the ideal form of government, though that also depends on the society it is run in.

For the record, I side with Lacanian philosophy on most things, which might explain why I'm having trouble adapting this into a democracy.

That idea would work out well for broad concepts, but for vague concepts you'd have to let it decide based on the "general attitude" of the populace, which could have VERY interesting results; I'd be for it, just to see what happens.
Correct, so not a dictatorship; more along the lines of an easily topple-able monarch AI.

>easily topple-able monarch AI.
That doesn't really fit the scenario.
An AI would only work when robots run basically everything else, which would really prevent toppling.

Hey OP! Should I go for a BA in philosophy?

Depends on what you want to do for a career. Teach? Local government? Advisory position for a government or a company? Sure, that or a minor in it.

If you're in the U.S., join the APA.

Agreed. I'm talking about a foreign invasion, not internal toppling, but you're correct.

>Dictatorships have crap incentive models for the general populace, military control wouldn't work for as long as people were involved in production of anything important in sustaining society.

What are you talking about? It has worked for 100% of human history. The only problem has been when the military doesn't want to oppress the people anymore (for example, in the French Revolution, when the military changed sides, and so on).

As long as robots are used, the government can be as brutal as it wants. A human military would be useless, but even today it's possible to make robotic weaponry.

>For the record, I side with Lacanian philosophy on most things, which might explain why I'm having trouble adapting this into a democracy.

I am glad that you haven't adopted his writing style. Due to Lacan's obscure writing style I have absolutely no idea what your position is on anything at all. I have no idea what you mean when you say that you side with Lacanian philosophy. Is it similar to other Frenchies like Derrida? I know Zizek was a student of Lacan, is it similar to him? Is it also fundamentally Hegelian? (I hope not, because I have no idea what Hegel means 95% of the time either. I don't even understand his "Philosophy of History" lecture even though I have read it several times.)

>That idea would work out well for broad concepts, but for vague concepts you'd have to let it decide based on the "general attitude" of the populace, which could have VERY interesting results; I'd be for it, just to see what happens.

Yes, and if this idea were fleshed out more, with maybe some idea of HOW such a decision would be made, then we would be able to model it. What would a good action recipe be? Would a slightly random action be best, or a straightforward, predictable one?
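
For what it's worth, that tradeoff has a standard toy form in reinforcement learning: act on the best-scoring policy most of the time, and act randomly a small fraction of the time to keep learning. A sketch under that assumption, with invented policy names:

```python
# The "slight random action" question is the classic explore/exploit
# tradeoff. Epsilon-greedy: with probability epsilon try a random policy
# to learn from the public's reaction; otherwise pick the policy that the
# aggregated attitude scores rank highest.

import random

def choose_policy(scores: dict[str, float], epsilon: float = 0.05) -> str:
    """scores: policy name -> fit with the aggregated public attitude."""
    if random.random() < epsilon:
        return random.choice(list(scores))  # explore: slightly random action
    return max(scores, key=scores.get)      # exploit: predictable best action

print(choose_policy({"fund_housing": 0.7, "cut_taxes": 0.4}))
```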

> Is it similar to other Frenchies like Derrida? I know Zizek was a student of Lacan, is it similar to him?

Yes, yes and yes. It's probably more similar to Derrida than the others. Zizek is the meme philosopher, but he's got some great points and is based on Lacan for sure.

I think a random action would be best in theory, to test the limits of the AI and how humans react to it. BUT in practice you'd want to play it safe and be direct and straightforward, sadly.

>I'm talking about a foreign invasion
Are you assuming that the AI wouldn't be distributed? Because that's a bad assumption.
>What are you talking about?
The government has no reason to spend money on the populace beyond keeping them from starving, so it can spend the money on the people who actually keep it in power, like the military and so on.


>As long as robots are used, the government can be as brutal as it wants.
It wouldn't want to be, assuming it's trying to maximize utility for the people it governs, which is a good assumption, as it has no other worries besides the welfare of its people.

>It wouldn't want to be, assuming it's trying to maximize utility for the people it governs, which is a good assumption, as it has no other worries besides the welfare of its people.

Well, it also needs to worry about staying in power, I guess. And brutality has been a go-to strategy for that sort of thing.

If it controls the means of production in the most efficient way (which is the only way it would run them anyway), then it doesn't have to worry about the interior.
The exterior, sure. But it wouldn't pick fights unless it was economically viable and provided better long-term outcomes.

This was way better than I thought it'd go, I'll do this again sometime. I'm out for now. :)

I like the AI idea both as a practice and as a thought experiment, btw.

>If it controls the means of production in the most efficient way (which is the only way it would run them anyway), then it doesn't have to worry about the interior.

Is this true, Professor OP? Can you and your Hegelian dialectics comment on this?

>But it wouldn't pick fights unless it was economically viable and provided better long-term outcomes.

What if the public were like the American people: totally bloodthirsty and war-hungry? The AI would be compelled to start wars every now and then to keep the populace happy and "patriotic".

I don't know if you watch YouTube, but CGP Grey's "Rules for Rulers" is topical here, and I pulled a lot from it for this thread.

People want war to work off anger, which wouldn't exist on a large scale in a society that had all its needs cared for without working.
New problems and paranoia could crop up, but war would almost never be the solution.

Have you been Lain-shitposting on Endchan recently?

Fuck off.

He's been gone for an hour, user.