Why isn't there a web browser written specifically for CUDA or OpenCL for maximum parallel multi threaded performance?

Not just GPU acceleration, but native.

Because modern browsers are fast enough; network I/O is the main bottleneck, not CPU performance. No matter how many CUDA kernels you run, the network will still dominate load times.

because parsing strings is not what GPUs are for?

Because browser tasks are not parallel, duh. It would most likely be even slower than what's currently available.

GPUs are good for embarrassingly parallel workloads (per-pixel processing being one such task). I'm not sure what you think browser devs are failing to use the GPU for.
From my POV, most of the effort has gone into optimizing their JavaScript engines to handle ever-crappier code (because webdevs only seem to get worse at performance).
>network is the bottleneck
I look at performance in the inspection tools sometimes when I find slow pages. Usually it's just some JS being run that triggers further requests after a second or two. I'm not sure if these problems hit every visitor the same way or if it's just me. The actual data transfers are trivial in size, usually well under a megabyte. It's possible these pages just make requests in a blocking fashion, waiting for content in order to process more content. It'd be misleading to call that a transfer-rate problem. It's mostly a latency problem, since every one of these blocking requests adds a full round trip to your load time (five sequential requests at 100 ms ping is half a second before anything renders).

10 years ago I would not have expected them to be this bad. They're not resource constrained in any significant way for their tasks (excluding fringe cases).

No thanks.
It would mean that JavaScript tards could continue this bullshit

>let's use 500 W for web browsing

The problem isn't bandwidth; it's latency, like you correctly identified. Network requests are slow, especially when they're sequential (for example, an AJAX request fetching a list from the database, then another AJAX request after that fetching info about a particular item).
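A minimal sketch of that pattern (the endpoint URLs are hypothetical): the second fetch can't even start until the first reply has arrived, so each dependency adds a full round trip.

```typescript
// Two dependent round trips: request #2 needs data from reply #1.
// Endpoint URLs are made up for illustration.
async function loadFirstItemDetails(): Promise<unknown> {
  const items: { id: string }[] =
    await fetch("/api/items").then(r => r.json());        // round trip #1
  // This URL depends on the first reply, so it necessarily
  // waits out a second full round trip.
  return fetch(`/api/items/${items[0].id}`).then(r => r.json()); // round trip #2
}
```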

Note that I said nothing about data rate, aka bandwidth; I said network I/O. While bandwidth has grown from a couple of kilobits to several gigabits per second in a matter of 20 years, latency has stayed more or less the same, because you can't go faster than the speed of light (light in fiber already puts a hard floor of tens of milliseconds on a transatlantic round trip).

However, bandwidth, or rather the data rate actually achieved over time, aka throughput, can still be a limitation, e.g. for downloading and streaming.

I agree with you that this should be pointed out though.

The average client browsing the web has a shitty Intel GPU.

Few have modern CUDA hardware, and CUDA is geared toward long-running computations, not the low-latency work a web page needs.

The Mozilla team wants to implement a lot of rendering optimizations in Rust, but that's maybe one or two years out.

Didn't HTTP 2.0 kinda fix that problem, and it's just a matter of implementation?

They aren't even using the full extent of OpenGL yet.

Because of Amdahl's law.
Throwing moar cores at it doesn't make it faster if the task cannot be split into multiple parts.
Think of 9 women trying to make a baby in 1 month.
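For reference, Amdahl's law in its usual form, where p is the parallelizable fraction of the work and N is the number of cores:

```latex
S(N) = \frac{1}{(1 - p) + \frac{p}{N}}
```

Even with infinite cores, the speedup is capped at 1/(1-p): if only half the work can be parallelized, you never get more than a 2x speedup, no matter how big the GPU is.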

In what way do you believe HTTP 2.0 fixed sequential processing?

The problem is the way requests are made, not what protocol is being used. If your next request is dependent on the reply of your first request, there's not much you can do about it.

with more parallelism

HTTP 2.0 only allows multiplexing multiple streams over a single socket; people have been opening multiple connections for a long time anyway. As I said, the issue is at the application level.
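To make the distinction concrete (hypothetical endpoints again): requests that don't depend on each other can already be fired concurrently at the application level, and HTTP/2 just lets them share one connection. Dependent requests gain nothing.

```typescript
// Independent requests: issue all three at once. With HTTP/2 they
// multiplex over a single socket; total cost is ~one round trip.
async function loadPageData() {
  const [user, settings, feed] = await Promise.all([
    fetch("/api/user").then(r => r.json()),
    fetch("/api/settings").then(r => r.json()),
    fetch("/api/feed").then(r => r.json()),
  ]);
  return { user, settings, feed };
}
```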

>downloading and streaming
Yeah, those are naturally excluded.
But interestingly, there you could use compression techniques to speed up the downloads significantly.
Not a webdev, but I imagine they haven't fixed whatever you're describing because the problems still exist.
If you load content through a script, you'd have to have solved the halting problem to predict every request correctly ahead of time.

Personally, if I made a customer-focused browser, I'd do speculative loads for content. No site owner would like you hammering their servers with potentially incorrect requests, but that's where increased parallelism could be a great way to handle the processing: find every place a request is made, trace each one backwards to some sort of constant (or an otherwise complete value, like a previous request), and build a dependency tree that lets you predict potential requests ahead of time (rough sketch below).
Really, when you have a site that takes 1 second to load, the amount of CPU/GPU power you have is ridiculous on just about any device. It may sound like a lot of work to put such a boundless task out there, but it doesn't have to be a 100% solution.
And if we're not limited by bandwidth, just latency, that's going to help a lot. And since it's just a caching system, you're not really slowing down the cases that aren't bad. Maybe you could have a centralized scoring system to avoid this for good sites you haven't even visited, but maybe that doesn't even matter? I'm not that familiar with JS use during site loading, but my impression is that multi-threading is very limited. I wouldn't expect to find them using every hyperthread of every core; if they'd gone that far to speed things up, why would they issue their requests so slowly?
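A rough sketch of that idea, just to pin down what it would look like; every name here is hypothetical, and the hard part (the dependency tracing) is stubbed out:

```typescript
type PredictedRequest = { url: string; confidence: number };

// Placeholder for the dependency-tracing step described above:
// scan the page's scripts for request call sites whose URLs trace
// back to constants or already-received data, and score each one.
function predictRequests(pageUrl: string): PredictedRequest[] {
  return []; // stub; a real tracer would go here
}

// Warm the HTTP cache with the likely requests, in parallel.
// A wrong guess wastes bandwidth but never slows the page down,
// since this is purely additive caching.
async function speculativePrefetch(pageUrl: string, threshold = 0.8) {
  const likely = predictRequests(pageUrl)
    .filter(p => p.confidence >= threshold); // don't hammer the server
  await Promise.allSettled(likely.map(p => fetch(p.url)));
}
```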

>But interestingly there you could use compression techniques to speed up the downloads significantly.
Yes, and this is frequently done. Gzip is commonly used for compressing HTTP response bodies, and most if not all video encoding formats rely on compression. For streaming, there are also quality adaptation algorithms that adapt to the available bandwidth. See MPEG-DASH (YouTube and Netflix use these).
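For instance, a Node server can gzip a response body with the built-in zlib module (a minimal sketch; the port and payload are made up):

```typescript
import { createServer } from "node:http";
import { createGzip } from "node:zlib";

// Minimal sketch: gzip the response when the client advertises
// support for it in Accept-Encoding.
createServer((req, res) => {
  const body = "hello ".repeat(10_000); // highly compressible payload
  if (String(req.headers["accept-encoding"] ?? "").includes("gzip")) {
    res.writeHead(200, { "Content-Encoding": "gzip" });
    const gzip = createGzip();
    gzip.pipe(res);
    gzip.end(body);
  } else {
    res.end(body);
  }
}).listen(8080);
```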

I'm the first guy you're responding to, but I just wanted to say that when I used to work as a web dev we had a rule of thumb that we should minimize the number of roundtrips necessary for making a request and showing the results.

Rendering anything is inherently embarrassingly parallel.

Luckily, you don't need to write CUDA in order to render an image on the GPU, though.

>Network is the problem
Nope.

Shit webdevs using ever more CPU time for their bullshit is the problem.
Disable scripts and watch the web fly.

Sounds like a good idea. I disrespect webdevs as a group a lot, but I recognize that you're not all bad and many of you are just put in a bad situation.
>gzip is used
I suspected something like that. I was mostly thinking of more aggressive parallel compression techniques, given the abundance of GPU resources. Of course, since each parallel worker sees more limited information, you'd lose a lot of the context that helps you compress (every bit is predicted from the surrounding bits; if you limit the window, as you do with parallel computing, you lose something). But I think more processing power could perhaps outweigh that.
I was mainly aiming to fit GPU usage in here to please OP.
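A quick way to see that tradeoff (using plain gzip on the CPU as a stand-in for a parallel, per-chunk GPU scheme): compress the same buffer whole and in independent chunks, then compare sizes.

```typescript
import { gzipSync } from "node:zlib";

// Compressing fixed-size chunks independently, as a parallel scheme
// would have to, loses redundancy across chunk boundaries: each
// chunk restarts with an empty dictionary. Chunked output is
// therefore typically larger than whole-buffer output.
const data = Buffer.from("the quick brown fox ".repeat(50_000));

const wholeSize = gzipSync(data).length;

const chunkSize = 64 * 1024;
let chunkedSize = 0;
for (let i = 0; i < data.length; i += chunkSize) {
  chunkedSize += gzipSync(data.subarray(i, i + chunkSize)).length;
}

console.log({ wholeSize, chunkedSize }); // expect chunkedSize >= wholeSize
```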

>implying those scripts you disabled aren't to a large degree relying on network requests
CPU usage is not the problem.

>crypto miners, bad code that no one cares about, advertisements, etc., etc. - nope, the network is not the problem.