Besides polygons, what else affects games, frame rate etc.? How does a game end up poorly optimised?
Poor use of texture space / too many high-res textures. Too many skinned meshes / lots of vert deformations. Fancy screen-space effects like refraction. Shadow map resolution is a killer and has to be kept under control. A high number of unique meshes and materials causes poor batching. Basically anything that requires lots of loading and unloading of memory can shaft frame rate.
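To make the batching point concrete, here's a rough sketch (made-up structs, not any engine's actual API) of why renderers sort draw calls by material: every material switch means rebinding shaders and textures, so grouping draws that share a material cuts those state changes way down.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Hypothetical draw record: which mesh to draw with which material.
struct DrawCall {
    int materialId;
    int meshId;
};

// Count how many times the renderer would have to switch materials
// (each switch means rebinding shaders/textures, which costs CPU and GPU time).
int countStateChanges(const std::vector<DrawCall>& draws) {
    int changes = 0;
    int current = -1;
    for (const DrawCall& d : draws) {
        if (d.materialId != current) {
            ++changes;
            current = d.materialId;
        }
    }
    return changes;
}

int main() {
    // Unsorted submission order: materials alternate, so almost every draw
    // forces a state change.
    std::vector<DrawCall> draws = {
        {0, 1}, {1, 2}, {0, 3}, {2, 4}, {1, 5}, {0, 6}, {2, 7}, {1, 8},
    };
    std::printf("unsorted: %d state changes\n", countStateChanges(draws));

    // Sorting by material groups identical materials together, so the
    // renderer only switches once per unique material.
    std::sort(draws.begin(), draws.end(),
              [](const DrawCall& a, const DrawCall& b) {
                  return a.materialId < b.materialId;
              });
    std::printf("sorted:   %d state changes\n", countStateChanges(draws));
    return 0;
}
```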
Shadows.
Too-high sample counts on post-process effects like motion blur, bloom, DoF, etc.
Too many particles or alpha blending.
Everything boils down to algorithmic complexity and how memory is managed
Lighting, post FX, textures, shaders, lod, particles
what do you know already? Your question is very general.
I know a lot of polygons can affect framerate, I play games, I don't know shit all about them.
the main problem with a lot of 3D games made by indie studios with no experience in 3D game development is piling loads of calculations onto the CPU without multithreading or offloading to the GPU
and because they don't scale back the number of computations per tick by simplifying the code, some graphically unimpressive games end up with massive framerate drops
if a game is well made, you can have visually photo-perfect graphics without even using the full capacity of the GPU; Crysis and other graphically impressive games have shown us this much
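rough idea of what I mean, just a toy sketch with made-up per-entity work, not real engine code: the naive version runs every update on the main thread inside the tick, the better version spreads the same work over worker threads so one core isn't stalling the whole frame

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <thread>
#include <vector>

// Stand-in for some per-entity work that an inexperienced team might
// leave on the main thread (AI, physics, procedural animation, ...).
double expensiveUpdate(int entity) {
    double acc = 0.0;
    for (int i = 1; i <= 10000; ++i)
        acc += std::sin(entity * 0.001 + i * 0.0001);
    return acc;
}

int main() {
    const int entityCount = 8000;
    std::vector<double> results(entityCount);

    // Naive tick: everything on one core; the frame can't finish until
    // this loop does.
    // for (int e = 0; e < entityCount; ++e) results[e] = expensiveUpdate(e);

    // Spread the same work across however many hardware threads exist.
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&, w] {
            // Each worker handles a disjoint slice of entities, so no locking
            // is needed for the writes.
            for (int e = w; e < entityCount; e += static_cast<int>(workers))
                results[e] = expensiveUpdate(e);
        });
    }
    for (std::thread& t : pool) t.join();

    std::printf("first result: %f\n", results[0]);
    return 0;
}
```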
I suggest you take a trip down memory lane. Simple reason is that all the stuff used in a modern game has been introduced bit by bit, and you can see the impact it has.
While almost everything in a modern game is made of polygons, their number isn't much of a problem nowadays. What causes far more trouble is coloring them (shading). Modern shaders do a lot of computations per pixel. It works because modern GPUs are small clusters of hundreds, sometimes thousands of small "CPUs", each working on a single pixel at a time. That works because the color of a pixel is (usually) not affected by the color of an adjacent pixel.

The impact of shading is so extreme that lots of games apply special tricks. Instead of doing all the work in 3D, they compute lots of intermediate values into a buffer, and then use that buffer to draw the final pixels. This saves a lot of work, because when you shade in 3D directly there's a chance the computations are "wasted": the GPU may later replace a previously colored pixel with a different one. Anyway, all this stuff is super simplified. Real-time computer graphics are high end science shit.
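That intermediate-buffer trick is usually called deferred shading. Here's a heavily stripped-down CPU-side sketch of the idea (all the names are invented, real implementations live on the GPU): the geometry pass only stores surface attributes per pixel, and the expensive lighting math runs exactly once per visible pixel afterwards.

```cpp
#include <array>
#include <cstdio>

// A tiny, made-up "G-buffer": per pixel we store the data the lighting pass
// needs, instead of lighting fragments as they are rasterized.
struct GBufferTexel {
    float nx, ny, nz;   // surface normal
    float albedo;       // surface color (grayscale to keep it short)
    bool  covered;      // was any geometry written here?
};

constexpr int W = 4, H = 4;

// Expensive per-pixel work we only want to do once per *visible* pixel.
float shade(const GBufferTexel& t) {
    const float lx = 0.0f, ly = 0.0f, lz = 1.0f;       // light direction
    float ndotl = t.nx * lx + t.ny * ly + t.nz * lz;    // Lambert term
    if (ndotl < 0.0f) ndotl = 0.0f;
    return t.albedo * ndotl;
}

int main() {
    std::array<GBufferTexel, W * H> gbuffer{};

    // Geometry pass: rasterize surfaces and just *store* their attributes.
    // Overlapping surfaces simply overwrite the texel; no lighting yet,
    // so overdraw wastes only cheap writes, not expensive shading.
    for (int i = 0; i < W * H; ++i)
        gbuffer[i] = {0.0f, 0.0f, 1.0f, 0.8f, true};

    // Lighting pass: run the expensive shading exactly once per covered pixel.
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            const GBufferTexel& t = gbuffer[y * W + x];
            float color = t.covered ? shade(t) : 0.0f;
            std::printf("%.2f ", color);
        }
        std::printf("\n");
    }
    return 0;
}
```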
As for optimizing, I'm afraid it's a buzzword. There are different ways to compute the same thing.
The first and most simple variant is called "brute forcing". It basically means you think of the problem as simply as possible, then write the code to solve it like that, and fuck performance. It works, but it often becomes useless very quickly.
For many problems, smart folks have developed special algorithms: programs that solve the same problem using fewer computations. In a way, that is an optimization, trading ease of reading the code for execution speed.
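A toy example of that trade (nothing game-specific, numbers made up): finding the smallest gap between objects along one axis. The brute-force version compares every pair, O(n^2); sorting first gets the same answer in O(n log n) at the cost of being slightly less obvious to read.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Brute force: compare every pair of positions. Trivial to read, O(n^2).
float closestGapBrute(const std::vector<float>& xs) {
    float best = 1e30f;
    for (size_t i = 0; i < xs.size(); ++i)
        for (size_t j = i + 1; j < xs.size(); ++j)
            best = std::min(best, std::fabs(xs[i] - xs[j]));
    return best;
}

// Smarter: sort once, then only neighbors can form the closest pair. O(n log n).
float closestGapSorted(std::vector<float> xs) {
    std::sort(xs.begin(), xs.end());
    float best = 1e30f;
    for (size_t i = 1; i < xs.size(); ++i)
        best = std::min(best, xs[i] - xs[i - 1]);
    return best;
}

int main() {
    std::vector<float> xs = {4.0f, 1.5f, 9.0f, 1.2f, 7.5f, 3.9f};
    std::printf("brute force: %.2f\n", closestGapBrute(xs));
    std::printf("sorted scan: %.2f\n", closestGapSorted(xs));
    return 0;
}
```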
That only works up to a point though. Beyond that, you've got to be more creative. Do you really need the exact solution to something, or is a rough value "good enough"? Often you can save a lot more computations by going for such an approximate result. These algorithms can become very complex, with a lot of extra steps, but it's worth it because, with enough work, they perform better.
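The classic example of settling for a rough value is the old fast inverse square root trick (mostly historical now that CPUs and GPUs have dedicated rsqrt instructions, but it shows the idea): a bit-level guess plus one Newton step gives you 1/sqrt(x) within a fraction of a percent, which is plenty for things like normalizing vectors every frame.

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>

// Approximate 1/sqrt(x) with the well-known bit trick plus one Newton step.
float fastInvSqrt(float x) {
    float half = 0.5f * x;
    std::uint32_t bits;
    std::memcpy(&bits, &x, sizeof(bits));   // reinterpret the float as raw bits
    bits = 0x5f3759df - (bits >> 1);        // magic initial guess
    float y;
    std::memcpy(&y, &bits, sizeof(y));
    y = y * (1.5f - half * y * y);          // one Newton-Raphson iteration
    return y;
}

int main() {
    for (float x : {0.25f, 2.0f, 10.0f, 123.0f}) {
        float exact  = 1.0f / std::sqrt(x);
        float approx = fastInvSqrt(x);
        std::printf("x=%7.2f exact=%.6f approx=%.6f error=%.4f%%\n",
                    x, exact, approx, 100.0f * (approx - exact) / exact);
    }
    return 0;
}
```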
What else can we do? Well, sometimes you don't have a powerful CPU, but lots of available memory. In such cases, instead of computing certain things over and over again, you compute them once, store them in memory and re-use them.
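A sketch of that memory-for-CPU trade (table size and names picked arbitrarily): precompute a sine table once, then replace every later sin() call with a cheap array read plus interpolation.

```cpp
#include <array>
#include <cmath>
#include <cstdio>

constexpr int   TABLE_SIZE = 1024;
constexpr float kTwoPi     = 6.2831853f;

// Trade memory for CPU: fill the table once, then every later lookup is an
// array read plus a linear interpolation instead of a full sin() call.
struct SinTable {
    std::array<float, TABLE_SIZE + 1> values{};

    SinTable() {
        for (int i = 0; i <= TABLE_SIZE; ++i)
            values[i] = std::sin(kTwoPi * i / TABLE_SIZE);
    }

    // Look up sin(angle) for angle in [0, 2*pi), interpolating between the
    // two nearest precomputed samples.
    float lookup(float angle) const {
        float pos  = angle / kTwoPi * TABLE_SIZE;
        int   i    = static_cast<int>(pos);
        float frac = pos - i;
        return values[i] + frac * (values[i + 1] - values[i]);
    }
};

int main() {
    SinTable table;
    for (float a : {0.1f, 1.0f, 2.5f, 5.0f}) {
        std::printf("angle=%.2f  sin=%.5f  table=%.5f\n",
                    a, std::sin(a), table.lookup(a));
    }
    return 0;
}
```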
This is also an optimization, and as an example it highlights an important aspect: you trade one property (CPU cycles go down) for another (memory consumption goes up). As such, it's really difficult to call something optimized or unoptimized. Things can be optimized for different aspects. Even brute-force code is optimized. It runs like shit, so it can't be performance. I mentioned it already: it's readability. When you've got to maintain a piece of code and make sure everything is correct, and performance is not that important (security code, for example), you will "optimize" for readability and fuck everything else.
Games usually strive to optimize for a balance between memory, CPU, ease of development (engines cost money) and a bunch of other factors. Cheap or low-budget studios will also "optimize" for budget, i.e. not invest too much money into making the code perform as well as possible. They don't have the budget for the high-end visual assets anyway. So you end up at a sweet spot where the engine is good enough to run the visuals you can afford on a reasonable system. It's not as shiny as the latest AAA, but it's optimized, just for something not obviously visible.
Polygons are practically the most inexpensive use of processing power
Poor coding
Texture resolution
Render resolution
Shadow resolution
Anti-aliasing
Effects
Post effects
Lighting
Particles
Draw distance
Shaders
>optimizing is a buzzword
i can't decide how i feel about this.
DirectX is too abstract and doesn't allow proper usage of the GPU. Also, tons of coders are shit, follow shitty coding practices, and encourage them in others.
...
Games nowadays are just a big-ass assembly line.
You have like 1000+ faggots working on your game all over the world, but most of them barely know what the game is about or whether they're doing the job right. It's "cheaper" and quicker than hiring people who actually know coding, modelling, etc.
Shove some millions into marketing and bam, instant AAA game.
I know this image tries to show that polygons actually do matter in the long run, but it doesn't do much.
The jump from the 2k to the 20k poly model looks like maybe a 20% improvement, unlike the 200 to 2k jump, which is more like a 200% improvement. The image is right, but the point of the old image still stands: the improvement is relative, not additive.
Good thread, guys, thanks
>take mario from nes
>increase resolution from 16x12 px to 640x480 px
>now there are just 40 pixel blocks of the same color
>why does it still look like blocky shit, increase in resolution is literally worthless
what kind of trickery was the original example trying to pull?
>that pic
Dunning-Kruger effect is real
op's pic shows lod, not diminishing returns
It's always particle effects, usually just too fucking many
aquamark ptsd?
It's called turbosmoothing. If you take a model and turbosmooth it, it just increases the poly count without adding any actual fine detail, unless you're smoothing rounded surfaces. It was really popular for shitty Half-Life mods to turbosmooth shit.
I still don't understand why those faggot-ass devs never optimize the engine for particles and lighting.
Seriously, lighting in most games is either prebaked shit or just shit.
also every ARPG ever made
All this talk about diminishing returns yet we can't even render the shit that's given to us today.
>optimize the engine for particles and Lighting
what do you expect them to do?
>lighting in most games are just prebaked shit or just shit
prebaking light is an optimization (performing the expensive lighting computations upfront and storing them in a lookup structure)
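a toy sketch of what prebaking means (made-up structures, not any engine's actual lightmap format): run the expensive lighting math once at build time, store the result per texel, and at runtime just read it back. cheap, but static, which is why baked lighting can't react to moving lights or destroyed geometry

```cpp
#include <array>
#include <cmath>
#include <cstdio>

constexpr int LM_W = 8, LM_H = 8;

// Pretend this is expensive: lots of light samples, bounces, occlusion tests.
// In a real baker this can take minutes per level, which is exactly why you
// don't want to do it per frame.
float expensiveLighting(int x, int y) {
    float dx = x - 3.5f, dy = y - 3.5f;
    return 1.0f / (1.0f + 0.2f * std::sqrt(dx * dx + dy * dy));
}

int main() {
    // Bake step (offline / at level build time): fill a lightmap once.
    std::array<float, LM_W * LM_H> lightmap{};
    for (int y = 0; y < LM_H; ++y)
        for (int x = 0; x < LM_W; ++x)
            lightmap[y * LM_W + x] = expensiveLighting(x, y);

    // Runtime: lighting a surface point is now just a lookup into baked data.
    int px = 2, py = 5;
    std::printf("baked light at (%d,%d) = %.3f\n",
                px, py, lightmap[py * LM_W + px]);
    return 0;
}
```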