liori

joined 1 year ago
[–] liori@lemm.ee 1 points 4 months ago

Personally I think child processes are the right approach for this. Launch a new process* for each query and it can (if you choose to go that route) dynamically load in compiled code. Exit when you're done, and the dynamically loaded code is gone. A side benefit is that memory leaks are contained, since all the memory you allocated is about to be reclaimed anyway.
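
Something like this minimal sketch of the lifecycle (the `query_main` entry point is just a placeholder, and error handling is trimmed): the parent forks, the child loads the freshly compiled query code, runs it, and exits, taking the loaded library and any leaks with it.

```cpp
#include <dlfcn.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

// Parent side: run one query in a throwaway child process.
int run_query_in_child(const char* so_path) {
    pid_t pid = fork();
    if (pid == 0) {  // child: load, run, exit -- nothing outlives us
        void* lib = dlopen(so_path, RTLD_NOW);
        if (!lib) { std::fprintf(stderr, "%s\n", dlerror()); _exit(1); }
        // "query_main" is a hypothetical entry point exported by the
        // dynamically compiled query library.
        auto query_main =
            reinterpret_cast<int (*)()>(dlsym(lib, "query_main"));
        _exit(query_main ? query_main() : 1);
    }
    int status = 0;
    waitpid(pid, &status, 0);  // reap the child; its leaks died with it
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```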

I'd probably be fine with hundreds or thousands of these hanging in memory. I suspect the generated code for a single query would be in the hundreds of kilobytes, maybe a megabyte. But yeah, this is one of those technical details I'd worry about.

Honestly, I wonder if you could just use an actual HTTP server for this? They can handle hundreds or even thousands of simultaneous requests. They can handle requests that complete in a fraction of a millisecond or ones that run for several hours. And they have good tools to catch/deal with code that segfaults, hits an endless loop, attempts to allocate terabytes of swap, etc. HTTP also has wonderful tools to load balance across multiple servers if you do need to scale to massive numbers of requests.

Not sure how an HTTP server would solve the CPU bottleneck of scanning terabytes of data per query?

[–] liori@lemm.ee 2 points 4 months ago

I somehow didn't think a regular JIT solution might be applicable here, but it is. Thank you! There seem to be a number of projects doing JIT for C++; I'll look at them.

 

I'm working on a query engine, essentially a tool to scan/filter/annotate by lookups/group by/aggregate a large dataset in the tens-of-terabytes range. The compute part seems to be the bottleneck for me (I'll be doing around 80-300 GB/s of reads, and yes, I will have hardware capable of providing that kind of throughput). My hypothesis is that by encoding the query in the form of template arguments I can make the compiler generate code optimized for a specific type of query (like the filtering or aggregation keys). But I don't know in advance what queries users will send, so I need a way to instantiate templates at runtime.
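
To sketch what I mean (toy schema, made-up names): with the predicate and the aggregate passed as template parameters, each instantiation compiles into a tight, fully inlined scan loop the optimizer can specialize for that exact query shape.

```cpp
#include <cstdint>
#include <span>  // C++20

struct Row { int32_t key; int64_t value; };

// The query shape is a compile-time parameter: the compiler sees the
// bodies of Filter and Agg and emits a specialized, inlinable scan loop.
template <typename Filter, typename Agg>
int64_t scan(std::span<const Row> rows, Filter filter, Agg agg) {
    int64_t acc = 0;
    for (const Row& r : rows)
        if (filter(r)) acc = agg(acc, r);
    return acc;
}

// One concrete instantiation: SELECT sum(value) WHERE key == 42.
int64_t sum_where_key_42(std::span<const Row> rows) {
    return scan(rows,
                [](const Row& r) { return r.key == 42; },
                [](int64_t acc, const Row& r) { return acc + r.value; });
}
```

The catch is that every new query shape needs a new instantiation, which is exactly why I need to instantiate templates at runtime.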

Sounds simple: for a new type of query, invoke a compiler at runtime to build a dynamic library with a new instantiation, then dynamically load it and off we go. Some prior work is here, though I'm pretty sure any JIT compiler also counts here. But there are enough technical details to worry about, and at the same time this idea isn't novel, so I wonder—are there any packaged solutions for this kind of approach?
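
Roughly the loop I have in mind, as a hedged sketch (the paths, compiler flags, and the exported symbol name are all illustrative): emit a translation unit that instantiates the template for the incoming query, shell out to the system compiler, then dlopen the result.

```cpp
#include <dlfcn.h>
#include <cstdlib>
#include <fstream>
#include <string>

using QueryFn = long long (*)();

// Write out a TU instantiating the template for this query, compile it
// into a shared object, and load the (hypothetical) exported entry point.
QueryFn compile_and_load(const std::string& source, const std::string& tag) {
    const std::string cpp = "/tmp/query_" + tag + ".cpp";
    const std::string so  = "/tmp/query_" + tag + ".so";
    std::ofstream(cpp) << source;  // emit the per-query instantiation TU
    const std::string cmd =
        "c++ -O3 -march=native -shared -fPIC -o " + so + " " + cpp;
    if (std::system(cmd.c_str()) != 0) return nullptr;  // compile failed
    void* lib = dlopen(so.c_str(), RTLD_NOW);
    if (!lib) return nullptr;
    // The generated TU must export this as extern "C" to avoid mangling.
    return reinterpret_cast<QueryFn>(dlsym(lib, "query_main"));
}
```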

[–] liori@lemm.ee 12 points 6 months ago (3 children)
[–] liori@lemm.ee 3 points 7 months ago

Plenty of them on various sites, like this one I found yesterday.

 

TL;DR: we have discovered wiretapping (a Man-in-the-Middle attack) of encrypted TLS connections to the XMPP (Jabber) instant messaging service jabber.ru (aka xmpp.ru), on its servers at the Hetzner and Linode hosting providers in Germany. The attacker issued several new TLS certificates using the Let's Encrypt service, which were used to hijack encrypted STARTTLS connections on port 5222 via a transparent MiTM proxy. The attack was discovered due to the expiration of one of the MiTM certificates, which hadn't been reissued. There are no indications of a server breach or spoofing attacks on the network segment; quite the contrary: the traffic redirection was configured on the hosting provider's network. The wiretapping may have lasted for up to 6 months overall (90 days confirmed). We believe this is lawful interception that Hetzner and Linode were forced to set up.

 

In short, by the exit polls:

  • PiS (the current ruling party, right-wing) got the most votes, but cannot rule alone, nor in coalition with the alt-right party Konfederacja, falling short by a decent margin (212 mandates total vs. the 231 needed for a majority).
  • The opposition parties (center-right Koalicja Obywatelska, center-right Trzecia Droga and leftist Lewica) together have a majority. They know they have to form a government together; the question is whether they can overcome their differences. They did suggest strong coöperation during their campaigns.
  • Highest-ever turnout (72%) in these elections.
  • The accompanying referendum (a device by the current ruling party to get more funds for promoting its ideas) was a total failure (40% turnout—voters had to explicitly opt out of participation!).
 

While high-frequency trading is not exactly my favourite topic, I do like reading about the technical approaches involved.

By Paul Bilokon, Burak Gunduz

This work aims to bridge the existing knowledge gap in the optimisation of latency-critical code, specifically focusing on high-frequency trading (HFT) systems. The research culminates in three main contributions: the creation of a Low-Latency Programming Repository, the optimisation of a market-neutral statistical arbitrage pairs trading strategy, and the implementation of the Disruptor pattern in C++. The repository serves as a practical guide and is enriched with rigorous statistical benchmarking, while the trading strategy optimisation led to substantial improvements in speed and profitability. The Disruptor pattern showcased significant performance enhancement over traditional queuing methods. Evaluation metrics include speed, cache utilisation, and statistical significance, among others. Techniques like Cache Warming and Constexpr showed the most significant gains in latency reduction. Future directions involve expanding the repository, testing the optimised trading algorithm in a live trading environment, and integrating the Disruptor pattern with the trading algorithm for comprehensive system benchmarking. The work is oriented towards academics and industry practitioners seeking to improve performance in latency-sensitive applications.
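
Not from the paper itself, but a tiny illustration of the constexpr idea it credits: move computation to compile time so the hot path degenerates to a single table load (the fee formula here is made up purely for the example).

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// The entire lookup table is computed at compile time (C++14+ constexpr
// loop; the fee formula is invented for illustration only).
constexpr std::array<int64_t, 256> make_fee_table() {
    std::array<int64_t, 256> t{};
    for (std::size_t i = 0; i < t.size(); ++i)
        t[i] = static_cast<int64_t>(i) * 3 + 7;
    return t;
}
constexpr auto kFeeTable = make_fee_table();

// Hot path at runtime: one indexed load, no arithmetic, no branches.
inline int64_t fee_for(uint8_t bucket) { return kFeeTable[bucket]; }
```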

[–] liori@lemm.ee 2 points 10 months ago* (last edited 10 months ago) (1 children)

Try dmraid; it's designed to take over the on-disk formats of various hardware RAID cards.

[–] liori@lemm.ee 1 points 10 months ago

This plea for help is specifically about work that is non-coding, yet still deeply technical.

 

I've said this previously, and I'll say it again: we're severely under-resourced. Not just XFS, the whole fsdevel community. As a developer and later a maintainer, I've learnt the hard way that a very large amount of non-coding work is necessary to build a good filesystem. There's enough not-really-coding work for several people. Instead, we lean hard on maintainers to do all that work. That might've worked acceptably for the first 20 years, but it doesn't now.

[…]

Dave and I are both burned out. I'm not sure Dave ever got past the 2017 burnout that led to his resignation. Remarkably, he's still around. Is this (extended burnout) where I want to be in 2024? 2030? Hell no.

[–] liori@lemm.ee 1 points 10 months ago (1 children)

I'm pretty sure that just as shipping containers were standardized by ISO to make transport easier, game boxes should be standardized to fit in a Kallax.

[–] liori@lemm.ee 4 points 11 months ago (1 children)

A lack of planning on your part doesn’t constitute an emergency on mine.

Though I kind of think Japanese grammar cannot express this thought, and the closest you can get is Ganbatte!

[–] liori@lemm.ee 11 points 11 months ago (3 children)

Good question! I quickly found this table, though this is yearly statistics only: https://www150.statcan.gc.ca/t1/tbl1/en/tv.action?pid=3510019201

[–] liori@lemm.ee 1 points 11 months ago* (last edited 11 months ago)

Yep, it's in the EU. File transfer shouldn't be bad if your files are large, though it's best to test it first—it might depend on your ISP's peering and your preferred transfer protocols/tooling. As for whether it's reputable for your purpose, you'll probably have to do your own research. Also, remember that the offer I mentioned would only be equivalent in durability to a single-box RAID5, so not exactly equivalent to Google's.

[–] liori@lemm.ee 2 points 11 months ago (2 children)

There's Jottacloud with unlimited storage for 10 EUR/month, but they gradually slow down after the first 5 TB; 30 TB might be a bit too much. There's Hetzner with their dedicated 4×10 TB machines for ~52 EUR; you could do RAID5 and have somewhat redundant 30 TB, at the cost of self-managing a dedicated machine. There are several providers doing regular S3 (which you can take advantage of with tools like rclone) with decent redundancy for 4-5 USD/TB + egress. For high-value data you should probably be spending more than 100 USD/month for 30 TB in the cloud, or invest in actual hardware. Do you need hot access to this dataset, or is a cold-storage archive enough?

[–] liori@lemm.ee 1 points 1 year ago (1 children)

Will they keep the dense email list view as an option? Seeing more than the 14 messages visible in the screenshot in the post is useful for sorting out large folders.
