Graphics Programming

Is anybody working on any graphics related projects?
What API are you using?

Can you use AA for this picture please? The jagged edges make me autistically mad.

OpenGL.

pic related

I lost that engine when my HDD died, but it was written in Java so it was shit anyway.
I'm working on a new one but it's not finished yet.

Nice pic.

Why do you do this instead of using opengl?

I made this thing a while back in OpenGL as an experiment.

nintendo plz dont sue

I found a screenshot of my old piece of shit.

Jesus Christ, OpenGL really is a nightmare.

OpenGL in Java is the nightmare.
Rendering nothing, it used 150 MBs of RAM.

Java is the nightmare.

OpenGL is just ugly.

The fact that you have to indent code in an unnatural way just to make sense of the OpenGL calls is terrible.

That's the old API, it was very low level.

Vulkan still makes you do weird indenting.

Vulkan is actually the low level API here.

OpenGL here.

youtube.com/watch?v=xW52oDPzrE0

I was messing around with compute shaders and shader storage buffer objects last time I used it. Trying to achieve 2^20 particles at 60fps, but right now I've managed only 2^19.

Not a good example at all of how to implement it, but here's the source as well: github.com/OllieReynolds/GL_PARTICLE_SYSTEM

Nice.

I haven't really messed with compute shaders, but they seem interesting.

oy vey the classes

I really like how easy they are to set up in OpenGL. If you've already done all the main buffer binding and context creation, then you can just call glDispatchCompute with however many work groups you want, and the driver just does it! The shaders are as easy to write as all the other shader stages, with only a few more things to consider. And data flow to buffers is all the same.
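Roughly, the dispatch side is just this (a minimal sketch; compute_prog, num_work_groups and the SSBO binding are assumed to exist already):

glUseProgram(compute_prog);
glDispatchCompute(num_work_groups, 1, 1);
// the barrier bit names how the written data will be USED next;
// here the SSBO results get read back in as vertex attributes
glMemoryBarrier(GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT);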

This pic helped me.

Have you ever worked with Vulkan before?

I haven't touched it yet - most of my personal graphics toolchain revolves around GL 4.5 and I'm too lazy to learn a new API.

Vulkan is literally the legs of OpenGL.
In order to do anything, you need to give Vulkan a command buffer to execute inside one of its multiple queues. Some queues support some operations while others do not, and which queues support what depends on the GPU's manufacturer.

It's a pain to work with but it gives you full control over the GPU. It can be more efficient than OpenGL if you use it right.

Context in Vulkan is attached to the instance handle, which means you can execute Vulkan functions from any thread as long as you have the instance handle. It's a really neat API to learn. Very fun to work with.
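Getting that instance handle is the one short part of the API (a minimal sketch, no validation layers or window-system extensions enabled - a real app needs VK_KHR_surface and friends for a swapchain):

VkApplicationInfo app_info = {};
app_info.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
app_info.pApplicationName = "hello";
app_info.apiVersion = VK_API_VERSION_1_0;

VkInstanceCreateInfo create_info = {};
create_info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
create_info.pApplicationInfo = &app_info;

VkInstance instance;
// every other Vulkan call (devices, queues, command buffers) hangs off this handle
if (vkCreateInstance(&create_info, nullptr, &instance) != VK_SUCCESS) {
    // handle the error
}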

How did you learn how to create compute shaders?
I can't seem to find any simple tutorials

It's a data movement exercise - the actual kernel you could prototype in plain C and then port over.

For me it was

#1 Write a struct to represent my particle
#2 Init 500k random particles CPU side
#3 Put that into a SSBO
#4 Set up a VBO with vertex attrib bindings for rendering the computed compute shader results, WITHOUT needing to read back to the CPU (this is important for frame rate! - see the CPU-side sketch after this list)
#5 Create your render shader prog and your compute shader prog
#6 The compute shader ends up having the same declaration as the CPU code, but it's cool since we'll be pulling some nice frame rates - kinda like this:
#version 430

struct Particle {
    vec2 position;
    vec2 velocity;
    float scale;
    float mass;
};

layout(std430, binding = 0) buffer Particles {
    Particle particles[];
};

layout(local_size_x = 128) in;

void main() {
    uint i = gl_GlobalInvocationID.x;
    if (i >= uint(particles.length())) return; // guard when the count isn't a multiple of 128
    particles[i].position += particles[i].velocity;
}

#7 None of this is worth it if you're not having fun
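And a rough sketch of steps #2-#4 on the CPU side (names made up; assumes a CPU-side Particle struct matching the std430 layout above and a VAO already created):

GLuint ssbo;
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, particles.size() * sizeof(Particle),
             particles.data(), GL_DYNAMIC_COPY);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo); // binding = 0 in the shader

// The trick in #4: bind the SAME buffer as GL_ARRAY_BUFFER and point the
// position attribute at it, so the compute results get drawn directly,
// no CPU readback.
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, ssbo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(Particle),
                      (void*)offsetof(Particle, position));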

Can you pass variables into individual kernels, or does it have to be the whole compute group?

You can choose from atomic counters, uniforms, or like 3 types of buffer object. Rather than passing a unique value to each invocation, you can retrieve a uint which gives you n between 0 and however many compute invocations you launched.
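In the shader it's just a built-in, nothing you pass in yourself:

// one invocation per particle; n runs over every invocation you dispatched
uint n = gl_GlobalInvocationID.x;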

what is local_size_x used for? That's the only part I didn't quite get about compute shaders. Otherwise it's just extended uniform buffer objects that can be processed independently from the rendering steps, etc. But what is local_size_x, and do we have barriers and atomic operations if the computation is not fully parallelizable?

OpenGL(3+) is actually pretty comfy, as soon as you have a working context.

I'm making an image viewer, I haven't been active lately because I'm researching interpolation.

No, I'm too retarded for math.

No you're not. Math is only a language, and so the only reason you struggle is because you haven't put the effort in to learn it.

there IS AA in this picture you retard. It's just that the shadow map size is too small.

> LWJGL + Java + OpenGL
Disgusting.

any good tutorials or guides on getting started with this stuff?

I got a basic triangle working on android.

I'm making a Runescape clone. Gonna make my own model editor and map editor for it in Java as well.

It worked for Jagex, it'll work for me.

To be fair, old Runescape ran at like 2 fps and the new Runescape client has been ported to C++ because it was too slow.

Minecraft was also written in Java and was also slow as fuck and eventually ported to C++ for the Windows 10 edition so you might save yourself a lot of hassle if you use a C variant from the start.

I've been looking into Rust since I'd like some handholding when I do C++. Would be cool to use the vulkan api with rust.

Funny, I implemented something extremely similar with OpenFrameworks a while back.

That's the size of the work group.

So for a particle system of 524,288 particles, CPU-side I would call

glDispatchCompute(particles.size() / WORK_GROUP_SIZE, 1, 1);

to execute 4,096 work groups, where WORK_GROUP_SIZE is whatever value you set for local_size_x in the compute shader. If the particle count isn't a multiple of the work group size, round up with (particles.size() + WORK_GROUP_SIZE - 1) / WORK_GROUP_SIZE and let the shader skip the out-of-range invocations.

retard

I'm on my phone now so no images but I'm working on octree geometry like in Cube 2. It's not as hard as it looks to implement.

Working on an edge detection post-process shader in Unreal 4. It works pretty well but misses some edges in the difference if the normals are aligned. I can actually just activate TXAA and the edges look really smooth already, but I'll ideally blend this hard edge shader with a smoother edge shader to get crisp but smooth lines that would completely replace the need for AA.

The best part is the speed. It's using the same concept that Sobel detection uses, but each pixel is compared to only 4 surrounding pixels. There are no trig or matrix functions either, just arithmetic and dot products.
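In GLSL terms the idea is something like this (a sketch, not the actual UE4 material; the G-buffer names and threshold are assumptions, and the depth term is one way to catch the aligned-normal case):

#version 330 core
uniform sampler2D normalTex; // assumed: G-buffer normals
uniform sampler2D depthTex;  // assumed: scene depth
uniform vec2 texel;          // 1.0 / resolution
uniform float threshold;
in vec2 uv;
out vec4 color;

void main() {
    vec3 n  = texture(normalTex, uv).xyz;
    float d = texture(depthTex, uv).r;
    vec2 offsets[4] = vec2[](vec2( texel.x, 0.0), vec2(-texel.x, 0.0),
                             vec2(0.0,  texel.y), vec2(0.0, -texel.y));
    float e = 0.0;
    for (int i = 0; i < 4; ++i) {
        // normal difference catches creases; depth difference catches
        // silhouettes where the normals happen to line up
        e += 1.0 - dot(n, texture(normalTex, uv + offsets[i]).xyz);
        e += abs(d - texture(depthTex, uv + offsets[i]).r);
    }
    color = vec4(vec3(step(threshold, e)), 1.0); // white on edges
}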

>To be fair, old Runescape ran at like 2 fps and the new Runescape client has been ported to C++ because it was too slow.
the server ticks are 60ms, the client is 60ms plus latency
the oldschool/2007 client itself runs at 30FPS without problem, even on POSes, but events only occur every server tick

Is there a certain way that vertices need to be read into a buffer? I've been having some trouble understanding how VBOs work. Newfriend to OpenGL.

That's pretty cool user. Keep it up :)

>UTfags need to do this

You could do this 10x more efficiently in OpenGL with stencil buffering.

Great advice user.

One thing to keep in mind is interleaving your vertex data, so you have one buffer for UVs, normals and positions together - this is one thing I didn't do as a beginner.
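Something like this (a sketch - the Vertex struct is made up, and a VAO is assumed to be bound):

struct Vertex {
    float position[3];
    float normal[3];
    float uv[2];
}; // 32 bytes, tightly packed

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(Vertex), verts.data(), GL_STATIC_DRAW);

// one buffer, three attributes; the stride steps from vertex to vertex
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, position));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, normal));
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, uv));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);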

what does the VAO do?
does it store all those vertex attribute pointer things?

Basically yes. When you draw a VAO it's pretty much just
>glBindVertexArray(my_vao);
>glDrawElements

what's so bad about glDrawArrays

bitch did I say anything was wrong with it?
If the vertices are indexed you use glDrawElements; if they're not (a particle billboard, for example) you use glDrawArrays.

Has anyone made the switch from OpenGL to Vulkan? If so, how is it? Is the transition smooth?

Also wondering how much of the new advertised freedom Vulkan allows is actually being used, versus people just copying code from tutorials to mimic OpenGL.

OpenGL. Now learning raymarching; I've written a little engine/framework which automates model/shader/etc. loading. Check iquilezles.org, a really good source for learning.

>vulkan
>fun to work with

user, why would you lie on the internet?

The outline on the sphere-box object floating in the center is actually using a stencil buffer. The limitation of the stencil buffer is that you can't detect edges on the inside of the object, that's why there's only an edge surrounding the two objects. I would have to make a separate material for each side of the cube for a stencil buffer to detect those edges.
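For reference, the GL version of the stencil outline is only a few calls (a sketch; draw_object() and the scaled-up flat-color redraw are assumed):

glEnable(GL_STENCIL_TEST);

// pass 1: draw the object normally, writing 1s into the stencil buffer
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glStencilMask(0xFF);
draw_object();

// pass 2: redraw slightly scaled up, flat color, only where stencil != 1.
// That leaves just the silhouette ring - and is exactly why interior
// edges can't show up: they're masked out by the object's own stencil.
glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
glStencilMask(0x00);
draw_object_scaled_flat_color();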

github.com/SaschaWillems/Vulkan/blob/master/triangle/triangle.cpp

>all this code for a triangle

Jesus fucking christ, learning this is not going to be fun.

Is that an Autodesk product?

Is it technically possible to render to a cubemap, since you can render to texture?

It's 99% setup stuff that can be reused fortunately.

My only problem with Vulkan right now is getting a surface-lost error for completely random fucking things. I can put braces around some code to reduce the scope of some variables and it'll cause the surface-lost error 100 lines later, even though the surface wasn't even created where the braces are. So I've put it on ice for now and am going to keep working on my OpenGL stuff instead.

You still need to bind the VBO as well

I made the mistake way back of thinking buffer bindings were part of the VAO state so I thought that might be worth pointing out

Used to use Povray.

But that's wrong. If you're just using glDrawElements or glDrawArrays you do not need to bind a buffer, just the VAO. glBufferSubData and glBufferData, though, do need a buffer to be bound.
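In other words (a sketch):

// drawing: the VAO already knows which buffers its attributes pull from
// (and it remembers the element array buffer binding), so this is enough
glBindVertexArray(my_vao);
glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, 0);

// updating: the buffer data calls operate on the currently bound buffer,
// so here the bind is required
glBindBuffer(GL_ARRAY_BUFFER, my_vbo);
glBufferSubData(GL_ARRAY_BUFFER, 0, data_size, new_data);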

how important is reducing memory usage on iGPUs, since they generally use shared memory? Like, instead of storing the vertices of a sphere, should I use a geometry shader to generate the vertices? I hope that makes sense.

software.intel.com/en-us/intel-rendering-research/publications

Full of interesting techniques and info. A must-see for advanced graphics programmers; shit's invaluable.

wrote a raytracer with OpenTK, a C# wrapper for OpenGL. Most shit was done by the CPU so it was slow as fuck, but damn, the quality man...

>Is anybody working on any graphics related projects?

Yes

>What API are you using?

OpenGL 1, using the OpenGL->OpenGL ES shim that's in XScreenSaver. It's not a library yet; it took a little bit of work (not much) to get it working for me.

Now I can code OpenGL C programs and have them run on Linux, Android, iOS, Windows and Mac.

I also do things with the SDL library sometimes. I use the old one; they threw away the code and the API and relicensed it, but instead of calling it a new project they call it SDL 2. I still use SDL 1.

Post some images - I want to see that quality.

Learning OpenGL.

All the tutorials I've seen focus on small 3d models, or primitives. How do you import a larger world model, like terrain? Is that a 3d model too or is it imported in another way?

I wish I could do a lot of this neato shit. I'm terrible with math though so it's all a pipe dream.

Depends on what file format it's in

.obj is so simple you can probably write your own importer, or find a well-supported one on Google.

Once you load it it's just vertex positions, normals and UVs like the quads, triangles and cubes you're probably using right now.

Don't use .obj or any of those formats at runtime. I recommend making a simple tool that takes an .obj file, interleaves and sorts it, and fwrites the raw data into your own custom file format for efficient file size and fast loading; then you can just fread from your custom format.
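Roughly like this (a sketch; the Vertex struct and the one-field header are made up):

// converter tool: parse the .obj, build interleaved vertices, then dump them
uint32_t count = (uint32_t)vertex_count;
FILE* out = fopen("model.mesh", "wb");
fwrite(&count, sizeof(count), 1, out);
fwrite(vertices, sizeof(Vertex), count, out);
fclose(out);

// engine: two freads straight into memory, zero parsing
FILE* in = fopen("model.mesh", "rb");
fread(&count, sizeof(count), 1, in);
Vertex* verts = (Vertex*)malloc(count * sizeof(Vertex));
fread(verts, sizeof(Vertex), count, in);
fclose(in);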

twitter.com/id_aa_carmack/status/767911253763170304

He wasn't terrible at math, he just didn't bother to study it on uni level.

on a related note though, I THOUGHT I was terrible at math until I started goofing around with programming

Same here. I was a fucking retard at math in school but can breeze through most uni tier math books easily these days.

Opposite, it was high level. OpenGL relies on a graphics server acting as a state machine, resulting in a fuckton of function calls to set parameters.

Why compute shaders? Why not transform feedback? It has earlier support than compute and can do the work in the same pass as the vertex shader.

Nobody gives a shit about performance anymore.

Has anybody done any work with DirectX here?

how do i into opengl? i want to be able to use freeglut and glew but i can't really figure out how to make a context or view

learnopengl.com

I'm learning Unity after doing some work from scratch, and it's annoying me sometimes

is Unreal better?

It's harder to shit out shitty games with

Which is basically equivalent to saying it makes better games, because Pajeets can't handle it

interesting - how does it do with 2D games?

use GLFW to manage context and open windows
just 3 simple calls:
glfwInit();
GLFWwindow* window = glfwCreateWindow(1366, 768, "A GLFW Window", NULL, NULL);
glfwMakeContextCurrent(window);
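and after that your frame loop is just (a sketch; you still need a GL function loader like GLEW or glad once the context is current):

while (!glfwWindowShouldClose(window)) {
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw stuff here ...
    glfwSwapBuffers(window);
    glfwPollEvents();
}
glfwTerminate();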

anyone know SIMD?

That's literally just boilerplate

I made some notes when I was learning OpenGL, tell me if there's anything wrong with it.

pastebin.com/10ibeXPy

Has anyone used bytes for normals? I'm thinking, if a normal is in the range of -1 to 1, why is everyone using floats for normals? Seems overkill.
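GL will even expand them back to [-1, 1] for you if you ask (a sketch; the location, stride and offset names are assumed):

// GL_TRUE = "normalized": signed bytes get mapped back to [-1, 1] in the shader
glVertexAttribPointer(normal_loc, 3, GL_BYTE, GL_TRUE, stride, (void*)normal_offset);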

I keep seeing tutorials where people make GL_RGB32F framebuffers and store colors as floats instead of GLubytes - a completely unnecessary waste of memory. Thread looks pretty fucking dead.

Jesus christ.

>interleaves it and sorts it
why?

.obj has a retarded way of handling indices - it keeps separate index lists for positions, UVs and normals, while OpenGL wants a single index per unique position/UV/normal combination. If you've written a loader for it you probably know you need to rebuild and sort the vertices yourself.

I don't have that much experience with OpenGL. But why do you need to sort the vertices?
Doesn't it not matter what order you draw the vertices in?
What do you sort the vertices by?

stealth sunibee thread

how the hell do you debug a program that locks the cursor?
Why doesn't gnome give the cursor back when you alt tab? Is it seriously incapable of doing even that?