Q1: What is the model/type of your monitor and would you recommend it?
Q2: Does your monitor support native PIP/PBP (and convenient on-the-fly switching) without software?
Q3: Does the PIP/PBP function of your monitor require two different sources, or can both be fed from the same source?
Q+: For the advanced users: can I control PIP/PBP with tools like these? (rough sketch of what I mean below)
linux tool / Windows tool
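To give an idea of what I mean, here is a rough, untested sketch of how I imagine poking the monitor over DDC/CI with ddcutil on Linux. The standard Input Source feature (0x60) is real MCCS, but the "E9" PIP code below is purely a placeholder I made up; if a monitor exposes PIP at all it would be a vendor-specific code reported by `ddcutil capabilities`.

```python
# Rough sketch (untested): drive the monitor over DDC/CI via the ddcutil CLI.
# VCP 0x60 (Input Source) is part of the MCCS standard; the "E9" feature
# below is a hypothetical placeholder for a vendor-specific PIP control.
import subprocess

def setvcp(feature: str, value: str, display: int = 1) -> None:
    """Write a single VCP feature value to the chosen display."""
    subprocess.run(
        ["ddcutil", "--display", str(display), "setvcp", feature, value],
        check=True,
    )

setvcp("60", "15")  # 0x0f: switch the main input to DisplayPort-1
setvcp("E9", "1")   # placeholder: hypothetical "enable PIP" feature
```

Whether any monitor actually exposes PIP/PBP this way is exactly what I am hoping to find out.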
Context: I have been a dual-monitor enthusiast ever since I found an old unused CRT monitor in my parents' attic. This has since been a core part of how I use computers. Currently I am rocking a standard 1440p main monitor and a vertical (16:10 FTW) 1050p one. My reliance on more screen space than my peers goes as far as my job having to purchase me an extra monitor as part of my disability package.
I have seen ultrawide monitors IRL and I absolutely love them, but as a matter of fact buying one means no more space for any others. That's why I am so interested in the PIP feature, but stores rarely ever mention it.
I know that 90% of the time I won't even need that feature, but if I play a video full-screen, or a game (some really do not like windowed mode), and I can't use my virtual buttons/display features/something completely different on the side, I am gonna regret my decision big time.
Thanks in advance for your answers.
Well, there are two things.
First there is speed, for which they do indeed rely on many thousands of high-end datacenter-grade Nvidia GPUs. And since the $10 billion investment from Microsoft, they have likely expanded that capacity. I've read somewhere that ChatGPT costs about $700,000 a day to keep running.
There are a few other tricks and caveats here though, like decreasing the quality of the output when there is high load.
For that quality of output they do deserve a lot of credit, because they train the models really well and continuously manage to improve their systems to produce ever higher-quality and more creative outputs.
I don't think GPT-4 is the biggest model out there, but it does appear to be the best that is available.
I can run a small LLM at home that is much, much faster than ChatGPT... that is, if I want to generate some unintelligent nonsense.
Likewise, there might be a way to redesign GPT-4 to run on a consumer graphics card with high-quality output... if you don't mind waiting a week for a single character to be generated.
I actually think some of the open-source, locally runnable LLMs like LLaMA, Vicuna and Orca are much more impressive if you judge them on quality versus power requirements.
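If anyone is curious what "running a small LLM at home" actually looks like, here is a minimal sketch using the llama-cpp-python bindings; the model path is just a placeholder for whatever quantized GGUF file you have downloaded.

```python
# Minimal local-LLM sketch using the llama-cpp-python bindings.
# "models/vicuna-7b.Q4_K_M.gguf" is a placeholder path; any quantized
# GGUF model you have on disk (LLaMA, Vicuna, Orca, ...) works the same way.
from llama_cpp import Llama

llm = Llama(model_path="models/vicuna-7b.Q4_K_M.gguf", n_ctx=2048)

out = llm(
    "Q: Why would someone want PIP on an ultrawide monitor? A:",
    max_tokens=128,
    stop=["Q:"],  # stop before the model invents the next question
)
print(out["choices"][0]["text"])
```

Something like a 7B model runs fine on a single consumer GPU or even CPU-only, which is the quality-versus-power trade-off I am talking about.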