Debunked: Delta Lambda Compression

Mick West

[Screenshot: Metabunk 2018-07-13 22-54-45.jpg]

This Indiegogo campaign is so obviously impossible that I suspect it might be some kind of elaborate educational hoax teaching people not to trust Indiegogo campaigns. But in case it's not, spoiler, it doesn't work.

Delta Lambda claims to compress the existing data on your phone by up to 1000 times using special algorithms.
[Screenshot: Metabunk 2018-07-13 22-57-26.jpg]

Unfortunately this is mathematically impossible. Data has a limit on how much it can be compressed. The limit is based on the "entropy" of the data (a measure of how much it resembles random data). The vast majority of data on your hard drive (photos, videos, audio, apps) is already compressed by advanced algorithms that approach the theoretical limits. In addition, most of it is compressed with lossy compression, meaning data is thrown away when it is compressed. Even pushing closer to the theoretical limits, and even with a lossy algorithm, you'd be hard pressed to get 10% more space from your hard drive, let alone 1000x with a LOSSLESS algorithm. Hence this chart is a joke:

[Screenshot: Metabunk 2018-07-13 23-04-17.jpg]
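
You can see the entropy limit with any off-the-shelf lossless compressor. Here's a minimal sketch using Python's built-in zlib (the random bytes stand in for already-compressed data like photos and video, since good compression output is statistically close to random):

Code:
import zlib, os

# Highly repetitive (low entropy) data compresses dramatically...
repetitive = b"all work and no play makes jack a dull boy " * 25000  # ~1 MB
print(len(zlib.compress(repetitive, 9)) / len(repetitive))    # well under 1% of the original

# ...but random (high entropy) data does not shrink at all. zlib just adds
# a few bytes of framing overhead, so the "compressed" file is slightly bigger.
random_like = os.urandom(1_000_000)
print(len(zlib.compress(random_like, 9)) / len(random_like))  # slightly over 1.0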

So what's going on here? Scam? Hoax? It seems obvious that loads of computer scientists would immediately explain why it won't work. So it's either a hoax, or a scam aimed at people disconnected from reality.

The problem is that the precise limit on how far any particular piece of data can be compressed without loss (its Kolmogorov complexity) is not computable, so it's always possible to argue that you've found a really, really good solution. Information theory in general is also complicated, so people who don't trust scientists might possibly fall for it.

A simple test here is whether the algorithm has won any of the data compression prizes out there. There are a few, and their targets are far more modest than what Delta Lambda claims to do. Here's one that should get them 50,000 euros.
https://en.wikipedia.org/wiki/Hutter_Prize
The Hutter Prize is a cash prize funded by Marcus Hutter which rewards data compression improvements on a specific 100 MB English text file. Specifically, the prize awards 500 euros for each one percent improvement (with 50,000 euros total funding)
Content from External Source
The current record is about 1/6th the size. DL claims 1/1000th. Since they have not stepped forward to claim the prize, it seems pretty obvious that it does not work.
 
They went a bit too far here:
http://dlcs.tech/whitepaper.pdf
Blockchain and Digital Ledgers are also prolific and emerging data consumers. Few noticed Bitcoin when it was introduced in 2009. Even less would suspect that in just ten years it would shake the financial world to its core, spawning a cryptocurrency industry whose valuation is rapidly approaching one TRILLION Dollars. However, steadily expanding numbers of users, and small block sizes can cause significant delays in blockchain transactions. Delta Lambda can eliminate this bottleneck by shrinking a Gigabyte of transactions into a one Megabyte block, increasing the speed, security, and scalability of the blockchain. Enhanced Viability. Reactive Currency - Smaller Blocks. Bigger Profit.
Content from External Source
The bitcoin blockchain is not suitable for compression because it consists mostly of random keys and checksums, which provably cannot be compressed. There is some other data which can be, but even a 1/2 size compression is highly implausible. 1/1000th is literally impossible.
 
Isn't this the storyline of the comedy show Silicon Valley?
Essentially yes, which (assuming it's a scam) might be something that they can point to to make people believe it. "Look, it worked on HBO!"

However the DL compression claims are at least 100x as large as the Pied Piper algorithm, which does not actually work.

The "Weissman score", which was invented for the show, does actually work though, and you can use it as a measure of real and imaginary schemes.
https://en.wikipedia.org/wiki/Weissman_score

The Weissman score is an efficiency metric for lossless compression applications, which was developed for fictional use. It compares both required time and compression ratio of measured applications, with those of a de facto standard according to the data type. It was developed by Tsachy Weissman, a professor at Stanford, and Vinith Misra, a graduate student, at the request of producers for HBO's television series Silicon Valley, about a fictional tech start-up.[1][2][3][4]

The formula is the following; where r is the compression ratio, T is the time required to compress, the overlined ones are the same metrics for a standard compressor, and alpha is a scaling constant.[1]

$W = \alpha \,\frac{r}{\overline{r}}\cdot\frac{\log \overline{T}}{\log T}$
Content from External Source
The imaginary Pied Piper had an essentially impossible score of 5.2, while the real Hutter Prize algorithms top out at about 1.1 (with a compression ratio of 6). The DL score would be around 200 with a compression ratio of 1000.
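
To put numbers on that, here's the score as a small function. The baseline figures below are hypothetical placeholders (a standard compressor achieving ratio 5 in the same time as the scheme being measured), not measurements:

Code:
import math

def weissman(r, t, r_std, t_std, alpha=1.0):
    """Weissman score: alpha * (r / r_std) * (log t_std / log t), where r is the
    compression ratio, t the time to compress, and r_std / t_std are the same
    metrics for a de facto standard compressor."""
    return alpha * (r / r_std) * (math.log(t_std) / math.log(t))

# Hypothetical: a standard compressor gets ratio 5 in 10 seconds, and the
# claimed scheme gets ratio 1000 in the same 10 seconds.
print(weissman(r=1000, t=10, r_std=5, t_std=10))  # 200.0, the "around 200" above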
 
It reminds me of the 'Sloot Encoding System'. Dutch inventor Jan Sloot claimed around 1995 that he could compress full-length movies into just a few kilobytes. He had an apparatus which gave the impression that it was able to show several movies that were compressed on a memory card, which at that time didn't have a large capacity.
Just when some big investors (Roel Pieper, former CTO of Philips was one of them) were ready to invest millions of dollars in a company around this technology, Sloot died of a heart attack (in 1999). He took the secret behind his technology to his grave ...

See https://en.wikipedia.org/wiki/Jan_Sloot
 
Lossless data compression algorithms cannot guarantee compression for all input data sets. In other words, for any lossless data compression algorithm, there will be an input data set that does not get smaller when processed by the algorithm, and for any lossless data compression algorithm that makes at least one file smaller, there will be at least one file that it makes larger. This is easily proven with elementary mathematics using a counting argument
Content from External Source
https://en.wikipedia.org/wiki/Lossless_compression#Limitations
 
It reminds me of the 'Sloot Encoding System'. Dutch inventor Jan Sloot claimed around 1995 that he could compress full-length movies into just a few kilobytes. He had an apparatus which gave the impression that it was able to show several movies that were compressed on a memory card, which at that time didn't have a large capacity.
Just when some big investors (Roel Pieper, former CTO of Philips was one of them) were ready to invest millions of dollars in a company around this technology, Sloot died of a heart attack (in 1999). He took the secret behind his technology to his grave ...

See https://en.wikipedia.org/wiki/Jan_Sloot


I saw a documentary about this one a while ago. It is fascinating that Jan could fool Philips! He demoed the box for them, but did not allow the engineers from Philips to look inside it. Wonder why? ;)
 
Just when some big investors (Roel Pieper, former CTO of Philips was one of them) were ready to invest millions of dollars in a company around this technology, Sloot died of a heart attack (in 1999). He took the secret behind his technology to his grave ...

This is reminiscent of the story behind several other scams like free (or nearly free) energy. Someone claims to have invented something. Then they don't produce it (or they eventually die, as everyone does) and people claim that there was a big cover-up, and sometimes that the person was assassinated.

I saw this first shared on an esoteric ET/UFO group on Facebook.
[Screenshot: Metabunk 2018-07-14 08-27-17.jpg]

The guys in the video are the "Executive team", and are both mostly active in ufology (and related esoteric topics like alien abductions, corpses, underground bases, etc.).

[Screenshot: Metabunk 2018-07-14 08-29-20.jpg]

It's possible, of course, that Barry and Steve are themselves being scammed. At the risk of being slightly indelicate, people in the UFO community have a very hopeful disposition - with a desire to believe in amazing things perhaps overriding the advice of "experts", who they feel have been wrong so many times before. Here's a list of the groups that Steve Mera shared the video to:

  1. AHK42 - A Hitchikkers Guide to 42 Talk Show Group.
  2. Alien Autopsy Analysis.
  3. Alien Conspiracy.
  4. ASSAP: the Association for the Scientific Study of Anomalous Phenomena.
  5. BARGAIN HUNTERS NEW, SALFORD, MANCHESTER, BOLTON.
  6. Bookjunkies.
  7. ET/UFO HUNTERS.
  8. Exopolitics Hong Kong.
  9. Exopolitics UK.
  10. Friends of Probe Blackpool @ St Anne's.
  11. Friends who like UFOlogy.uk.
  12. Haunted History.
  13. LookUpTV.
  14. MAPIT.
  15. Nature Of Reality Radio.
  16. North Cheshire Paranormal.
  17. OTHER WORLD GLOBAL NETWORK.
  18. PA, WV, & DE MUFON.
  19. Pacific Paranormal.
  20. Paranormal Buy & Sell Worldwide.
  21. Paranormal Events Promotion.
  22. Paranormal UK.
  23. Phenomena Mag. en Español - Paranormal, OVNIs, Misterios- Grupo de opinión.
  24. Phenomena Magazine.
  25. Pure Paranormal.
  26. Richplanet.net Appreciation Society.
  27. Somewhere in the Skies Podcast.
  28. Space and Universe.
  29. The Paranormal Symposium.
  30. The Phenomena World Association.
  31. The Scientific Establishment of Parapsychology.
  32. The UFO Trust!
  33. Truth Juice Birmingham.
  34. UFO Disclosure Vault.
  35. UFO Unplugged.
  36. UFO, Crop Circle & High Strangeness Event Research Trip - Wiltshire UK.
  37. UFOLOGY INTERNATIONAL.
  38. UFOSECRETSPACE.
  39. World UFO.
  40. ‎ملتقى نجوم الصحافة التونسية‎.
Content from External Source
Presumably just the groups he's a member of - but it's a rather limited audience.
 
The whole thing is rather odd. It's got a very limited audience - seemingly mostly people in these paranormal/UFO type groups. Then I saw someone tweeting about it, a Walter Baltzley:

https://twitter.com/BaltzleyWalter
[Screenshot: Metabunk 2018-07-15 12-48-38.jpg]


His twitter feed looks like a caricature of a Trump supporter, almost Russian troll-like, with references to the QAnon conspiracy theory.

He claims to be a co-creator of DL. I could not find him listed at first, but then realized he's on there as "Walter Ignatius" with different hair.
[Screenshot: Metabunk 2018-07-15 12-50-23.jpg]

https://www.linkedin.com/in/walter-baltzley-77603126
[Screenshot: Metabunk 2018-07-15 12-51-41.jpg]

The company "Gerwig Baltzley Consulting LLC" is a one month old joint venture with Nathan Gerwig:
https://www.linkedin.com/in/nathan-gerwig-384bb853
[Screenshot: Metabunk 2018-07-15 12-54-00.jpg]

Who is on DL as "Nathan Andrew"
[Screenshot: Metabunk 2018-07-15 12-55-02.jpg]
 
Back to the more tangible technical impossibilities:

[Screenshot: Metabunk 2018-07-15 13-01-16.jpg]

There's so much wrong here it's hard to know where to start.

1TB is 1024GB, yet they claim that compressing 1TB on a single-core processor takes only 100x as long as compressing 1GB.

They claim 1GB compressed on a single core @ 2.6 GHz takes 0.95 seconds, and that with 10 cores at 3.0 GHz it takes 0.033 seconds. That's about 29 times as fast, despite a theoretical increase in processing power of only 11.5x (assuming perfect parallelization of the algorithm).

1GB in their claimed 0.033 seconds is a bandwidth of about 30 GB per second. The world's fastest SSD tops out at 6.8GB/s, so they are claiming a speed more than four times faster than you could possibly even read the data off the drive. And that's with a fancy advanced datacenter drive; high-end consumer SATA3 drives top out at about 0.5GB/s. Actual real-world data compression, like H.264 video compression, is MUCH slower than this (I've got an 8-core 3.2 GHz iMac Pro, and it does around 0.05GB/s compressing raw video from memory). And again, they claim to compress movies which are ALREADY COMPRESSED.
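
Their own numbers don't survive a back-of-the-envelope check. A quick sanity-check script, using only the figures quoted from their table above:

Code:
# Figures from the DL comparison table above.
t_single_core = 0.95    # claimed seconds for 1 GB on 1 core @ 2.6 GHz
t_ten_core    = 0.033   # claimed seconds for 1 GB on 10 cores @ 3.0 GHz

speedup           = t_single_core / t_ten_core   # ~28.8x claimed speedup
theoretical_max   = 10 * (3.0 / 2.6)             # ~11.5x with perfect parallel scaling
implied_bandwidth = 1.0 / t_ten_core             # ~30 GB/s just to touch the input once

fastest_ssd = 6.8  # GB/s, roughly the fastest datacenter SSD
print(f"claimed speedup {speedup:.1f}x vs theoretical maximum {theoretical_max:.1f}x")
print(f"implied throughput {implied_bandwidth:.0f} GB/s, "
      f"{implied_bandwidth / fastest_ssd:.1f}x the fastest SSD's read speed")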

So it's all obviously not going to work. The question then becomes, what's going on? Who is being scammed?
 
I've only signed up to this website after seeing Delta Lambda promoting itself, begging for money for something that, if it worked, would be a multi-billion pound product. I voiced my concerns and have been told to speak to their technical team about why it's not a steaming pile of chips. I was wondering, would you do it on my behalf or on your own and share your findings?

You have a much better understanding of compression than my very basic one. I know enough to tell that the explanation they give sounds like crap.

They gave their LinkedIn pages and said they would be happy to answer questions.

https://www.linkedin.com/in/walter-baltzley-77603126/

Thank you
 
I voiced my concerns and have been told to speak to their technical team on why it’s not a steaming pile of chip. I was wondering would you do it on my behalf or on your own and share your findings?
I'm quite sure their technical team is fully aware that it does not work - simply because it would fail to compress things by the suggested amounts.
 
I just realized that Steve Mera is the same person who took the photos of the balloon that were used by Tom DeLonge's enterprise to illustrate their initial announcement.
 
Why 1000x compression is "impossible".

Of course 1000 times compression is possible for some files.

Let P and Q be any two distinct 1000-bit files/bit-strings.

The following compression/decompression algorithm works with all files/bit-strings and achieves 1000x compression of both P and Q.

Code:
# P and Q can be any two distinct 1000-bit strings (written here as Python
# strings of "0"/"1" characters; any two distinct choices work).
P = "01" * 500
Q = "10" * 500

def compress(bits):
    if bits == P: return "0"
    if bits == Q: return "1"
    if bits == "0": return P
    if bits == "1": return Q
    return bits

def decompress(bits):
    # The same swap in the other direction: "0" <-> P, "1" <-> Q, everything
    # else unchanged, so decompress(compress(x)) == x for every string x.
    if bits == "0": return P
    if bits == "1": return Q
    if bits == P: return "0"
    if bits == Q: return "1"
    return bits

But, like all compression algorithms, the vast majority of files achieve only 1x compression (try your favourite compression algorithm on a random binary file), and for every file that gets smaller there is another that gets bigger (try compressing a compressed file).

Further, for all compression algorithms, the greater the compression, the exponentially fewer the files that achieve that level of compression.

Simply put, there are exponentially more (different) big files than there are (different) small files. It's just counting.
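
You can check that counting claim directly for small sizes. A tiny sketch (nothing here depends on any particular compressor, it's just the pigeonhole principle):

Code:
# For every length n there are more n-bit strings than there are strings of
# all shorter lengths combined, so no lossless (one-to-one) scheme can shrink
# every n-bit string.
for n in range(1, 11):
    strings_of_length_n = 2 ** n
    shorter_strings     = 2 ** n - 1   # lengths 0 .. n-1 combined
    print(n, strings_of_length_n, shorter_strings)
# Each row has exactly one more n-bit string than shorter strings, so at least
# one n-bit input cannot map to anything shorter without a collision.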

Consider any compression algorithm.

  • How many different files can achieve 1000x compression?

Let's answer a simpler question.

  • How many files of size 1000 bits or less can achieve 1000x compression? There are 2¹⁰⁰¹ − 1 ≈ 2×10³⁰¹ distinct such files. Note that 1000 bits is about 125 SMS characters and about 62 WhatsApp characters, so there are plenty of such files lying around your phone for magic compression to the cloud.

Well one 1000-bit file can compress to the 1-bit file 0, and another can compress to the 1-bit file 1. And that is it*.

  • So at most 2 of the 2×10³⁰¹ possible files of length 1000 bits or less can achieve 1000x compression.

That is, only about 10⁻²⁹⁹ percent of these files can achieve a 1000x compression level**.


*We could do slightly better by compressing one other file to the empty file, giving 3 instead of 2, but then the compression of the empty file is non-empty. Compressing anything to the empty file achieves infinite compression, but then the compression of the empty file must be non-empty, i.e., infinite expansion. Consequently, compression functions tend to compress the empty file to itself.
**Four more files can compress to 2-bit files, but that is only 500x; eight more files can compress to 3-bit files, but that is only 333x; etc. The counting theorem cannot be beaten.
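
The big numbers above are easy to verify exactly, since Python handles arbitrary-size integers:

Code:
files = 2 ** 1001 - 1    # distinct bit-strings of length 1000 or less
print(len(str(files)))   # 302 digits, i.e. roughly 2 x 10^301

# At most 2 of them (the two that map to the 1-bit strings "0" and "1")
# can achieve 1000x compression:
print(2 / files * 100)   # ~9.3e-300, i.e. about 10^-299 percent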
 
[... minor edit ...]

  • So at most 2 of the roughly 2×10³⁰¹ possible files of length 1000 bits or less can achieve 1000x compression.

That is, only about 10⁻²⁹⁹ percent of these files can achieve a 1000x compression level.
 