Le I only need 100% multiprocessing le meme

>Le I only need 100% multiprocessing le meme
Serial performance will always be required in interactive applications.
This will remain relevant unless you never actually sit at a computer and only encode x265.

Naive people are under the delusion that it's only a matter of progress.
It's not just a matter of progress, it's a mathematical fact: you will always require serial performance on CPUs involved in interactivity.
This is because an interactive application must always adhere to the rules of a global loop, and that loop must protect its shared variables with inefficient and slow locks if you plan to do any parallelism with them.

You can cry all you want for your next game to be "100% parallelized" but it will never happen; that's how the mathematics works.
You will always need strong individual threads, not only in gaming but in anything that requires human input, including setting up your Photoshop before you actually render, which is 99% of the time you actually use a computer.
So next time you plan to type that "those games" or "that desktop application" have "lazy programmers", think again, learn to code, or shut up.

>The city of $1700 Intel CPU

Which is why you chose to not be a faggot, you read the OP, and realized the sweet spot is usually a 4c/8t Intel or even a 4/8 AMD.
AMD fucked up by shilling first for the 8/16 CPUs. They cause housefires and don't do well on the benchmarks people need.
Intel knows what it's doing by promoting the 4/8 chips first.

Filthy kikes on suicide watch.
Also, that $1700 POS which is supposedly "the best of Intel's HEDT line" doesn't even support ECC.

Is this the rationalization of a poor person?

>Which is why you chose to not be a faggot, you read the OP, and realized the sweet spot is usually a much cheaper 4c/8t Intel or even a 4/8 AMD.
>AMD fucked up by shilling first for the 8/16 CPUs. They cause housefires and don't do well on the benchmarks people need.
>Intel knows what it's doing by promoting the 4/8 chips first.

No, you chose to not be a faggot, you read the OP, and realized nobody on Sup Forums buys $1,700 chips; the sweet spot is usually a much cheaper 4c/8t Intel or even a 4/8 AMD.
AMD fucked up by shilling first for the 8/16 CPUs. They cause housefires and don't do well on the benchmarks people need.
Intel knows what it's doing by promoting the 4/8 chips first.

>inefficient and slow locks

You having a seizure, son?
Did Intel's ultrafast cores fry your brain?

I can imagine him drooling from his mouth as he stares with his beady eyes at the monitor.

Who are you even talking to, retard?

You keep sharing the same picture without understanding it. It's not against multithreading; Cadence is actually pro-multithreading.

For example, compare the performance of their simulator Spectre to their parallel simulator Spectre APS. The difference is huge. For smaller simulations it does not matter, of course, but as the circuit size grows, the time spent calculating each node within a single transient step becomes more important. So if you know which calculations you are going to do in a single transient step, you can improve performance a lot by using multiple threads. What you cannot do is calculate the next transient step before the current one, because that is completely against the idea of a transient simulation.
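That split, serial across timesteps but parallel within one, can be sketched as follows. This is a toy model, not Spectre's actual solver; the per-node "decay" math is made up purely for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def step_node(args):
    # Hypothetical per-node work for one transient step (toy RC-style decay).
    voltage, dt = args
    return voltage - voltage * dt

def transient(voltages, dt, steps, pool):
    # The outer loop is inherently serial: step n+1 needs the result of step n.
    for _ in range(steps):
        # ...but every node update inside one step is independent,
        # so it can be farmed out as a parallel map.
        voltages = list(pool.map(step_node, [(v, dt) for v in voltages]))
    return voltages

with ThreadPoolExecutor(max_workers=4) as pool:
    result = transient([1.0, 2.0, 4.0], dt=0.1, steps=10, pool=pool)
```

The point is structural: parallelism lives inside each step, never across steps.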

So how does parallelism not matter? Aren't simulations and games getting larger? Don't you have to do known calculations many times at a given timestep in games? It's only the timesteps that can't be distributed; the related calculations within a step can be.

Many games these days are using "task" or "job" systems for parallel processing. That is, the game spawns a fixed number of worker threads which are used for multiple tasks. Work is divided up into small pieces and queued, then sent to be processed by the worker threads as they become available.
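A minimal sketch of such a task/job system, using Python's standard `queue` and `threading` modules; the class and method names are illustrative, not taken from any real engine:

```python
import queue
import threading

class JobSystem:
    """Fixed pool of worker threads pulling small jobs off one shared queue."""
    def __init__(self, workers=4):
        self.jobs = queue.Queue()
        for _ in range(workers):
            threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            fn, args = self.jobs.get()   # block until a job is queued
            fn(*args)
            self.jobs.task_done()

    def submit(self, fn, *args):
        self.jobs.put((fn, args))

    def wait(self):
        self.jobs.join()                 # block until every queued job has run

results = []
results_lock = threading.Lock()

def square(i):
    with results_lock:                   # jobs touching shared data still lock
        results.append(i * i)

js = JobSystem()
for i in range(8):
    js.submit(square, i)
js.wait()
```

Note the completion order is nondeterministic; only the set of results is guaranteed, which is exactly why job systems suit independent chunks of work.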

This is becoming especially common on consoles. The PS3 is based on the Cell architecture, so you need to use parallel processing to get the best performance out of the system. The Xbox 360 can emulate a task/job setup designed for the PS3, as it also has multiple cores. You would probably find that for most games a lot of the system design is shared among the 360, PS3, and PC codebases, so the PC version most likely uses the same sort of tactic.

While it is hard to write threadsafe code, as many of the other answers indicate, I think there are a few other reasons for the things you're seeing:

First, many open source games are a few years old, and it is only with this generation of consoles that parallel programming has become popular and even necessary, as mentioned above.
Second, very few open source projects seem concerned with getting the highest possible performance. As John Carmack pointed out to the Utah GLX project, highly optimized code is often harder to maintain than unoptimized code, so the latter is generally preferred in open source contexts.
Third, I wouldn't take a small number of threads created by a game to mean that it's not using parallel jobs well.

Threads are dead, baby.

Realistically, in game development, threads don't scale beyond offloading very dedicated tasks like networking and loading. Job systems seem to be the only way forward, given that 8-core systems are becoming commonplace even on PCs. And you can pretty much guarantee that upcoming many-core systems like Intel's Larrabee will be job-system based.

This has been a somewhat painful realization on PlayStation 3 and Xbox 360 projects, and now even Apple has jumped on board with its "revolutionary" Grand Central Dispatch system in Snow Leopard.

Threads have their place, but the naive promise of "put everything in a thread and it will all run faster" simply doesn't work in practice.

I don't know which games you have played, but most games run the sound on a separate thread, and networking code, at least the socket listeners, runs on a separate thread too.

However, the rest of the game engine generally runs in a single thread. There are reasons for this. For example, most processing in a game runs along a single chain of dependencies: graphics depend on the state of the physics engine, as does the artificial intelligence. Designing for multiple threads means you have to introduce frame latency between the various subsystems for concurrency's sake. You get quicker response time and snappier gameplay if these subsystems are computed linearly each frame. The part of the game that benefits most from parallelization is of course the rendering subsystem, which is offloaded to highly parallelized graphics accelerator cards.

The main reason is that, as elegant as it sounds, using multiple threads in a program as complicated as a 3D game is really, really, really difficult. Also, before the fairly recent introduction of low cost multi-core systems, using multiple threads did not offer much of a performance incentive.

If you can count on any mathematical experience, illustrate how a normal execution flow that is essentially deterministic becomes not just nondeterministic with several threads, but exponentially complex, because you have to make sure every possible interleaving of machine instructions still does the right thing. A simple example of a lost update or a dirty read is often an eye-opener.
"Slap a lock on it" is the trivial solution... it solves all your problems if you're not concerned about performance. Try to illustrate how much of a performance hit it would be if, for instance, Amazon had to lock the entire east coast whenever someone in Atlanta ordered one book!
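The lost-update situation can be demonstrated directly. The `time.sleep(0)` below is an artificial yield inserted to widen the race window, so the exact racy count will vary from run to run:

```python
import threading
import time

N = 1000
racy = 0
safe = 0
lock = threading.Lock()

def lost_update():
    global racy
    for _ in range(N):
        tmp = racy
        time.sleep(0)      # yield between read and write: the race window
        racy = tmp + 1     # may overwrite another thread's increment

def locked_update():
    global safe
    for _ in range(N):
        with lock:         # "slap a lock on it": correct, but serialized
            safe += 1

threads = [threading.Thread(target=f)
           for f in (lost_update, lost_update, locked_update, locked_update)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# safe is always exactly 2*N; racy usually ends up well below 2*N
```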

Multi-threaded programming is probably the most difficult solution to concurrency. It is basically quite a low-level abstraction of what the machine actually does.

There are a number of approaches, such as the actor model or (software) transactional memory, that are much easier, as is working with immutable data structures (such as lists and trees).

Generally, a proper separation of concerns makes multi-threading easier. That is all too often forgotten when people spawn 20 threads, all attempting to process the same buffer. Use reactors where you need synchronisation, and generally pass data between different workers with message queues.
If you have a lock in your application logic, you did something wrong.
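One sketch of that message-queue style, in the spirit of the actor model: the state is owned by a single thread and mutated only via its mailbox, so application logic never takes a lock (the class and message names are illustrative):

```python
import queue
import threading

class CounterActor:
    """State is touched by exactly one thread (the mailbox loop): no locks."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self._total = 0                  # private to the mailbox loop
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while True:
            op, arg = self.mailbox.get()
            if op == "add":
                self._total += arg
            elif op == "get":
                arg.put(self._total)     # arg is a reply queue

    def add(self, n):
        self.mailbox.put(("add", n))

    def get(self):
        reply = queue.Queue()
        self.mailbox.put(("get", reply))
        return reply.get()

actor = CounterActor()
senders = [threading.Thread(target=lambda: [actor.add(1) for _ in range(100)])
           for _ in range(4)]
for t in senders:
    t.start()
for t in senders:
    t.join()
total = actor.get()   # FIFO mailbox: all adds are processed before this get
```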

So yes, technically, multi-threading is difficult.
"Slap a lock on it" is pretty much the least scalable solution to concurrency problems, and it actually defeats the whole purpose of multi-threading: it reverts the problem back to a non-concurrent execution model. The more you do it, the more likely it is that you have only one thread running at a time (or zero, in a deadlock).
This is like saying "Solving the problems of the 3rd world is easy. Just throw a bomb on it." Just because there is a trivial solution doesn't render the problem trivial, since you care about the quality of the result.

But in practice, solving these problems is just as hard as any other programming problem, and it is best done with appropriate abstractions, which makes it quite easy in fact.

Different game systems have different levels of difficulty when it comes to threading. The most difficult part of ANY multithreading is managing data dependencies between threads.
Any kind of sorting can be done in parallel.
Rendering can be naturally multi-threaded because you collect a batch of things to draw, and you can break up the preparation and draw submission work into tasks without dependencies on other data. Physics is similar in that there are phases of work that can be done in parallel.
Gameplay logic systems are generally run on a single thread because gameplay code has lots of cause/effect relationships. Many operations need to happen in a specific order. Game objects can query and manipulate the data of other objects, which leads to random access patterns, data stomping, race conditions, and hard-to-find bugs! You can't just use locks/atomics everywhere without really hurting efficiency.
Also, many third-party libraries do not play nicely with multithreading, including many scripting languages.
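The sorting claim above can be sketched as sort-the-chunks-in-parallel, then merge. This is a simple illustration, not a production parallel sort, and CPython's GIL limits the real speedup for pure-Python work:

```python
from concurrent.futures import ThreadPoolExecutor
import heapq

def parallel_sort(data, pool, chunks=4):
    """Sort independent chunks in parallel, then k-way merge the results."""
    n = max(1, len(data) // chunks)
    parts = [data[i:i + n] for i in range(0, len(data), n)]
    sorted_parts = list(pool.map(sorted, parts))   # no shared data: no locks
    return list(heapq.merge(*sorted_parts))        # serial merge phase

with ThreadPoolExecutor(max_workers=4) as pool:
    out = parallel_sort([5, 3, 8, 1, 9, 2, 7, 4, 6, 0], pool)
```

The structure mirrors the rendering example: a dependency-free parallel phase followed by a short serial combine step.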

Why lock when you can use SHARED MEMORY? Security is for faggots.

Multithreading is bad except in the single case where it is good. That case is when:

The work is CPU bound, or parts of it are CPU bound.
The work is parallelisable.
If either or both of these conditions are missing, multithreading is not going to be a winning strategy.

If the work is not CPU bound, then you are not waiting on threads to finish work, but rather on some external event, such as network activity, for the process to complete its work. With threads there is the additional cost of context switches between threads, the cost of synchronization (mutexes, etc.), and the irregularity of thread preemption. The most common alternative is asynchronous IO, in which a single thread listens on several IO ports and acts on whichever happens to be ready, one at a time. If by some chance these slow channels all happen to become ready at the same time, it might seem like you will experience a slow-down, but in practice this is rarely true. The cost of handling each port individually is often comparable to, or better than, the cost of synchronizing state across multiple threads as each channel is emptied.
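A minimal single-threaded async-IO sketch using Python's `selectors` module, with `socketpair`s standing in for network channels; one thread services whichever channel is ready:

```python
import selectors
import socket

# One thread, several channels: the selector reports which are ready.
sel = selectors.DefaultSelector()
pairs = [socket.socketpair() for _ in range(3)]
received = {}

for i, (reader, writer) in enumerate(pairs):
    reader.setblocking(False)
    sel.register(reader, selectors.EVENT_READ, data=i)
    writer.send(f"msg-{i}".encode())   # pretend these arrive from the network

while len(received) < len(pairs):
    for key, _ in sel.select(timeout=1):
        # Only ready channels show up here, so recv never blocks.
        received[key.data] = key.fileobj.recv(1024).decode()

sel.close()
for reader, writer in pairs:
    reader.close()
    writer.close()
```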

Many tasks may be compute bound yet still impractical to multithread, because the process must synchronise on the entire state. Such a program cannot benefit from multithreading because no work can be performed concurrently. Fortunately, most programs that require enormous amounts of CPU can be parallelized to some degree.

your Windows gaming use case is not applicable to us professionals who rely on our machines for our living

A common source of threading issues is the usual approach employed to synchronize data: having threads share state and then implementing locking in all the appropriate places is a major source of complexity for both design and debugging. Getting the locking right, balancing stability, performance, and scalability, is always a hard problem to solve. Even the most experienced experts get it wrong frequently. Alternative techniques for dealing with threading can alleviate much of this complexity; the Clojure programming language, for example, implements several interesting techniques for dealing with concurrency.

...

Multi-threading is a bad idea if:

Several threads access and update the same resource (set a variable, write to a file), and you don't understand thread safety.
Several threads interact with each other and you don't understand mutexes and similar thread-management tools.
Your program uses static variables (threads share them by default).
You haven't debugged concurrency issues before.
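One way to sidestep the shared-static-variable pitfall from that list is per-thread storage. A sketch using `threading.local` (the names are illustrative); each thread sees its own `state.buffer`, so no lock is needed on it:

```python
import threading

state = threading.local()    # per-thread storage: nothing to share or lock
results = {}
results_lock = threading.Lock()

def worker(name):
    state.buffer = []                    # this thread's private buffer
    for i in range(3):
        state.buffer.append(f"{name}-{i}")
    with results_lock:                   # only the final hand-off is locked
        results[name] = list(state.buffer)

threads = [threading.Thread(target=worker, args=(n,)) for n in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```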

A recent application I wrote that had to use multithreading (although not an unbounded number of threads) was one where I had to communicate in several directions over two protocols, plus monitor a third resource for changes. Both protocol libraries required a thread to run their respective event loop in, and once those were accounted for, it was easy to create a third loop for the resource monitoring. In addition to the event loop requirements, the messages going through the wires had strict timing requirements, and one loop couldn't risk blocking the others; this was further helped by using a multicore CPU (SPARC).

There were further discussions on whether each message processing should be considered a job that was given to a thread from a thread pool, but in the end that was an extension that wasn't worth the work.

All in all, threads should, if possible, only be considered when you can partition the work into well defined jobs (or series of jobs) such that the semantics are relatively easy to document and implement, and when you can put an upper bound on the number of threads you use and that need to interact. The systems where this is best applied are almost always message-passing systems.

IT'S THE AMDAHL POSTER! RUN FOR THE HILLS!

To paraphrase an old quote: A programmer had a problem. He thought, "I know, I'll use threads." Now the programmer has two problems. (Often attributed to JWZ, but it seems to predate his use of it talking about regexes.)

A good rule of thumb is "Don't use threads unless there's a very compelling reason to." Multiple threads are asking for trouble. Try to find a good way to solve the problem without them, and only fall back to threads if avoiding them is as much trouble as the extra effort of using them. Also, consider switching to multiple threads if you're running on a multi-core/multi-CPU machine and performance testing of the single-threaded version shows that you need the performance of the extra cores.

Multi-threading is not a good idea if you need to guarantee precise physical timing. Other cons include intensive data exchange between threads. I would say multi-threading is good for really parallel tasks if you don't care much about their relative speed/priority/timing.

Actually, multithreading is not scalable and is hard to debug, so it should be avoided wherever possible. There are few cases where it is mandatory: when performance on a multi-CPU machine matters, or when you deal with a server that has a lot of clients taking a long time to answer.

In any other case, you can use alternatives such as a queue + cron jobs, or the like.

Ridiculously general advice is all you are good for.
Abusing threads is discouraged because they are not an end-all solution: programmers should always profile and remove bottlenecks first, and only then use threads as appropriate.
But spamming generalized "common knowledge" crap without understanding the background of the issue, just because you saw it on that one cool webpage once, is retarded.

I would ask "why?" first. What improvements do you expect from "threading" on those games. Aren't they fast enough already?

If you see a game that is slow, you still don't know whether multithreading would help. For instance, if the game is slow because it sends more polygons to your video card than it can handle, multithreading won't help you there. You need other kinds of optimization.

Multithreading can actually make things slower if there are more CPU-intensive threads than cores, because of the context-switch overhead between threads. In some cases "controlled scheduling", or "cooperative multitasking", can perform even better than simple preemptive scheduling. That's why there are "fibers" and ad-hoc scheduling mechanisms.
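The cooperative-multitasking idea can be sketched with generators: each task yields control voluntarily, so the scheduler is deterministic and lock-free. This is a toy round-robin, not a real fiber library:

```python
def task(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"   # yield = voluntarily hand control back

def round_robin(tasks):
    """Cooperative scheduler: run each task until it yields. No preemption,
    so no locks are needed and the interleaving is fully repeatable."""
    trace = []
    while tasks:
        current = tasks.pop(0)
        try:
            trace.append(next(current))
            tasks.append(current)     # re-queue: runs again after the others
        except StopIteration:
            pass                      # task finished: drop it
    return trace

trace = round_robin([task("a", 2), task("b", 2)])
```

Because the scheduler controls every switch point, the interleaving is identical on every run, which is exactly what preemptive threads cannot promise.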

>you will always require serial performance on CPUs involved in interactivity
This is utterly false. An interactive program spends the vast majority of its time doing nothing, waiting for user input. You need zero performance for that. The actual CPU-heavy tasks home users encounter are almost exclusively gaming, video processing, and encryption, all of which can be parallelized perfectly well, and everything else is easily handled by any single core on the market.

You need to think, what are the actual benefits of threads? Remember that on a single core machine, threads don't actually allow concurrent execution, just the impression of it. Behind the scenes, the CPU is context-switching between the different threads, doing a little work on each every time. Therefore, if I have several tasks that involve no waiting, running them concurrently (on a single core) will be no quicker than running them linearly. In fact, it will be slower, due to the added overhead of the frequent context-switching.

If that is the case, then why ever use threads on a single core machine? Well, firstly, because tasks can sometimes involve long periods of waiting for some external resource, such as a disk or other hardware device, to become available. While a task is in a waiting state, threading allows other tasks to continue, thus using the CPU's time more efficiently.

Secondly, tasks may have a deadline of some sort in which to complete, particularly if they are responding to an event. The classic example is the user interface of an application. The computer should respond to user action events as quickly as possible, even if it is busy performing some other long-running task, otherwise the user will become agitated and may believe the application has crashed. Threading allows this to happen.

3D games create a programmatic model of the game world; players, enemies, items, terrain, etc. This game world is updated in discrete steps, based on the amount of time that has elapsed since the previous update. So, if 1ms has passed since the last time round the game loop, the position of an object is updated by using its velocity and the elapsed time to determine the delta (obviously the physics is a bit more complicated than that, but you get the idea). Other factors such as AI and input keys may also contribute to the update. When everything is finished, the updated game world is rendered as a new frame and the process begins again. This process usually occurs many times per second.

We can see that the engine is in fact achieving a very similar goal to that of threading. It has a number of long running tasks (updating the world's physics, handling user input, etc), and it gives the impression that they are happening concurrently by breaking them down into small pieces of work and interleaving these pieces, but instead of relying on the CPU or operating system to manage the time spent on each, it is doing it itself. This means it can keep all the different tasks properly synchronized, and avoid the complexities that come with real threading: locks, pre-emption, re-entrant code, etc. There is no performance implication to this approach either, because as we said a single core machine can only really execute code linearly anyway.
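The loop described above, discrete steps with each subsystem run in a fixed order every frame, might look like this stripped-down sketch; the "physics" and "AI" here are placeholders, not a real engine:

```python
def game_loop(world, dt, frames):
    """Single-threaded loop: subsystems run in a fixed order each frame,
    so every subsystem sees a consistent, fully-updated world state."""
    rendered = []
    for _ in range(frames):
        world["x"] += world["vx"] * dt            # physics: integrate position
        world["ai_ticks"] += 1                    # AI sees the updated physics
        rendered.append(round(world["x"], 6))     # "render" the final state
    return rendered

world = {"x": 0.0, "vx": 2.0, "ai_ticks": 0}
frames = game_loop(world, dt=0.5, frames=3)
```

The interleaving is managed by the loop itself rather than by the OS scheduler, which is the whole point: no locks, no preemption, deterministic frames.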

Things change when we have a multi-core system. Now tasks can run genuinely concurrently, and there may indeed be a benefit to using threading to handle different parts of the game world updates, so long as we can manage to synchronise the results into consistent frames. We would expect, therefore, that with the advent of multi-core systems, game engine developers would be working on this.

>This is like saying "Solving the problems of the 3rd world is easy. Just throw a bomb on it."
Isn't that america in a nutshell though?

What the fuck are you talking about? Do you even understand the basic concept of a game engine? An extremely simple move of the mouse changes the whole scene, and that by definition creates an entire conglomerate of variables that cannot be static and are now entirely dependent, live, on user input.
Most people don't get that games don't even need that much multithreading to begin with.
GPUs do most of the work there, and most GPU drivers don't play well with being driven from multiple CPU threads.

How do cores communicate?
Since you claim that it's only a "maybe" that performance can be improved this way, I must assume the primary barrier is a failure to communicate, causing the equivalent of a video with the sound delayed, but even worse, since it's an interactive real time game.

Shouldn't processors have some kind of central node that they all relay off of and therefore allowing for synchronization via a database?