With all the jibber-jabber about Starlink being down, I figured it was an appropriate time to remind people this exists. Vint Cerf, one of the founding wizards of the internet, established the IPN SIG in 1998 to cuss and discuss issues related to IP protocols over high-latency, potentially high-loss links. Worth poking around if you've not seen it before, though I sort of wish there were more use cases regarding information security.
I used to work with some of those board members at JPL!
DTN is cool stuff. We had a few applications built up for distributed "delay aware" computing so that you could, at the network/application boundary, farm out jobs to, e.g., an orbiting compute cluster coming over the horizon.
Really fun times.
And there are lots of open implementations to play with!
https://github.com/nasa/HDTN
https://github.com/nasa-jpl/ION-DTN
https://gitlab.com/d3tn/ud3tn
https://upcn.eu/
It's also noteworthy that DTN can be used on Earth too, especially in remote places with poor/unreliable data connections. There's some interesting literature on those applications, which is how I first got into DTN when I started working with it.
I'm not sure how seriously I take an organisation supposedly focused on interplanetary space where the main advertised event seems to be Raspberry Pi workshops.
There are many conferences and academic discussions that spend a long time bikeshedding while industry actually does stuff.
The lack of involvement from industry in a field where stuff is happening suggests to me this is one of them.
Most of what they do is Layer 2 and above, so it's hardware agnostic - prototyping on a Pi is fine.
Their work is gaining traction. DTN Bundle Protocol has been baselined for the LunaNet specification, which a bunch of private companies are designing to for lunar relay networks. Bundle Protocol is also currently on the CCSDS standards track so it should be formally part of the CCSDS protocol suite soon.
For those unaware: CCSDS is the Consultative Committee for Space Data Systems; they set widely used standards for spacecraft communications protocols. Basically anything beyond Earth orbit flies some variant of a CCSDS protocol stack, and a substantial chunk of missions in Earth orbit do as well, particularly if they are government funded. It's an international effort; China and Russia participate too, so that everyone can communicate if need be.
> The lack of involvement from industry in a field where stuff is happening suggests to me this is one of them.
Remind me again, which companies are going inter-planetary?
Off topic, but...
> to cuss and discuss
...is a turn of phrase that's new to me and I love it. Totally stealing that.
It's from my 7th grade history teacher, Mr. Mooneyham. As in "tomorrow we're going to cuss and discuss the Louisiana Purchase. Make sure you read chapter 12." He was also the teacher who had the "Super-Duper Discussion Stick" which he used to hit your desk if you fell asleep in class. And at least once he played the version of the "Devil Went Down to Georgia" w/ the bad words left in.
In the old days, public schools in suburban Texas were quirky, but the quality of education was relatively decent. For instance, I remember that Thomas Jefferson was president in 1803 when the Louisiana Purchase was finalized.
IMO the most likely solution to interplanetary networking is to throw tons of datacenter and compute at anywhere that's more than a few light-seconds from the nearest existing datacenter, then use something along the lines of IPFS to perform data synchronization between planets.
Despite the name, IPFS has no properties that make it suitable for this application. It’s very bandwidth intensive and isn’t designed with latency or disruption tolerance in mind.
there's a lot of interesting problems just in the networking.
if it took four years for a message to cross the void from where you are to the recipient, you certainly wouldn't want to wait a full eight years to see they didn't send a receipt message and only then retransmit.
eight years is some awful latency.
you'd probably want to send each message at something like a fibonacci over the months. so, gaps of (1, 1, 2, 3, 5, 8, etc) would mean sending the message on months (1, 2, 4, 7, 12, 20, 33, etc) until you got a confirmation message that they had received it. they would similarly want to send confirmations in the same sort of pattern until they stopped receiving copies of that message.
spreading the resends out over time would ensure not all of your bandwidth was going to retransmissions. you'd want that higher number of initial transmissions in hopes that enough of the message makes it across the void that they would have started sending receipts reasonably close to the four years the initial message would take to get there.
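a tiny sketch of that schedule (Python; the gap sequence is the one from the comment above, and the 100-month horizon is just an example):

```python
def fibonacci_send_months(max_month):
    """Yield the months on which to (re)send a message, with Fibonacci
    gaps of 1, 2, 3, 5, 8, ... between successive transmissions."""
    month, gap, next_gap = 1, 1, 2
    while month <= max_month:
        yield month
        month += gap
        gap, next_gap = next_gap, gap + next_gap

print(list(fibonacci_send_months(100)))
# -> [1, 2, 4, 7, 12, 20, 33, 54, 88]
```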
if you had the equivalent of a galactic fido-net system, it could be decades and lifetimes between messages sent to distant stars and messages sent back.
Wouldn't you want to completely saturate your bandwidth? Just always be transmitting whatever message has been transmitted the least.
that would probably depend on how much power it takes to send the messages, how much actual usable bandwidth you could manage over the distances involved, and how much data you want to send.
if it takes a large amount of energy to send the data, we probably wouldn't want to run the equipment all the time. strong pulses would let the equipment cool down or recharge capacitor banks or whatever during downtime.
interstellar dust and other debris floating through space could cause interference, not to mention radiation from everything else around us, and our own sun shining right next to our little laser.
might want to move the laser out onto pluto or something to avoid having it right up against the sun.
You'd want to do a lot of work with erasure codes as well.
It would be a lot more efficient to use erasure coding + heavy interleaving with other traffic so that you can withstand a maximum predicted outage period.
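A toy illustration of the interleaving half of that idea (Python; the codeword sizes and the two-erasure budget are made-up numbers, and a real system would pair this with an actual erasure code such as Reed-Solomon):

```python
def interleave(codewords):
    """Send symbol 0 of every codeword, then symbol 1, and so on, so a
    burst outage erases only a few symbols from each codeword."""
    return [cw[i] for i in range(len(codewords[0])) for cw in codewords]

def deinterleave(stream, n_codewords):
    return [stream[i::n_codewords] for i in range(n_codewords)]

# Four 6-symbol codewords from an erasure code assumed to rebuild any
# codeword missing up to 2 symbols.
cws = [[f"c{c}s{s}" for s in range(6)] for c in range(4)]
tx = interleave(cws)
assert deinterleave(tx, len(cws)) == cws
# An outage wiping 8 consecutive symbols of `tx` costs each codeword
# only 2 symbols, within the assumed erasure budget.
```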
and you'd probably want to take orbits/vectors into account, a Dijkstra-esque algorithm where the distances change is crazy.
Also, our signals are usually going very short distances very quickly and are very protected from solar/cosmic waves by the ionosphere. What kind of data loss could you get transmitting in open space across vast distances and time?
Interstellar space is pretty empty, and we have good models for it thanks to the radio astronomy community. Dispersion is low enough to be nearly negligible, even over tens of light years.
Determining theoretical interstellar link rates is a fairly straightforward link budgeting exercise, easier in fact than most terrestrial link calculations because you don't have multipath to worry about.
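For flavor, a back-of-the-envelope version of that exercise (Python; every parameter here, the power, gains, distance, and system temperature, is an assumption I picked for illustration, not something from the comment):

```python
import math

C = 3.0e8                 # speed of light, m/s
d = 10 * 9.461e15         # 10 light-years, in meters
f = 32e9                  # Ka-band carrier, Hz
p_tx_dbw = 60             # 1 MW transmitter (assumed)
g_tx = g_rx = 80          # dBi, enormous dish arrays (assumed)
t_sys = 20                # K, cryogenic receiver (assumed)

fspl_db = 20 * math.log10(4 * math.pi * d * f / C)  # free-space path loss
p_rx_dbw = p_tx_dbw + g_tx + g_rx - fspl_db
n0_dbw_hz = 10 * math.log10(1.380649e-23 * t_sys)   # noise density kT
cn0_db_hz = p_rx_dbw - n0_dbw_hz
print(f"path loss {fspl_db:.0f} dB, C/N0 {cn0_db_hz:.1f} dB-Hz")
# ~34 dB-Hz of C/N0: on the order of a kilobit per second with good coding
```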
I agree! This was my obsession when I worked at JPL, unfortunately the answer was usually "no mission will sacrifice their budget for reusable assets".
You'd need a mission whose purpose is to emplace compute stations.
That's why we can't have nice things.
you'd probably want a different protocol than IPFS for that application. managing a DHT with extremely high latency isn't going to work very well. something like named-data networking would probably work better since the transmitter can know:
1. exactly what prefixes need to be buffered based on the received interest messages from deep space
2. exactly which data rate is possible at any given time
3. exactly how much data needs to be sent from the buffer in each transmission
optimizing for high latency really pushes your design choices around compared to our comparatively very low latency uses here on earth. it's pretty interesting to think about.
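A hypothetical sketch of that scheduling idea, covering the three numbered points above (Python; the names, object sizes, and link numbers are all invented for illustration, this isn't a real NDN API):

```python
# Content the transmitter has buffered because matching interest
# messages arrived from deep space (point 1).
content_store = {
    "/mars/telemetry/latest":  b"t" * 1200,
    "/mars/science/sols/0042": b"s" * 5000,
    "/mars/imagery/pan/0001":  b"i" * 8000,
}
pending_interests = ["/mars/telemetry/latest", "/mars/science/sols/0042"]

def plan_pass(link_rate_bps, window_s):
    """Pick requested objects that fit in the bytes this contact
    window can carry (points 2 and 3)."""
    budget = link_rate_bps * window_s // 8
    plan = []
    for name in pending_interests:
        data = content_store.get(name)
        if data is not None and len(data) <= budget:
            plan.append(name)
            budget -= len(data)
    return plan

print(plan_pass(link_rate_bps=2000, window_s=30))
# -> ['/mars/telemetry/latest', '/mars/science/sols/0042']
```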
How would that work to, say, Mars? Have satellites filling many, many orbits between the two planets?
We already have an interplanetary internet called the NASA Deep Space Network. Understanding its limitations and challenges is a good way to start thinking about this.
Nah, nothing that extreme. The broadcast range and bandwidth of even current technology in space could handle a huge amount of fairly rapid data transfer between the two planets.
It would be more like a handful of satellites, some orbiting Earth, some orbiting Mars, and then a handful of relay satellites serving as intermediaries.
Don't count on playing e-sports competitively, though.
The lag under ideal conditions would be insane, about 3 minutes each way (when the planets are "only" 55 million kilometers apart), but with repeaters and overhead probably closer to twice that.
The comment was a few light-seconds. That's a lot of hops to fill between here and Mars to sustain that coverage year-round.
The distance between Earth and Mars varies between roughly 180 and 1340 light-seconds.
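Quick check of those figures, using 1 AU ≈ 499 light-seconds and closest/farthest separations of about 0.37 and 2.68 AU:

```python
AU_LS = 499  # one astronomical unit in light-seconds
print(0.37 * AU_LS, 2.68 * AU_LS)  # ~185 and ~1337 light-seconds one way
```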
But carpeting that distance across the entire volume of space between the planets with data centers every few light-seconds apart seems ambitious. A hundred or more data centers in space?
> throw tons of datacenter and compute at anywhere that's more than a few light-seconds from the nearest existing datacenter
I think I'm misinterpreting the comment.
It's like the transatlantic internet cable. One really beefy interconnect is more than enough for two halves of the planet to talk.
We wouldn't need to blanket the solar system in data centers to be able to communicate with other planets. We would only need enough connections so that no matter where in their respective orbits they are, there is a line of radio "sight" that is clear enough for high bandwidth communications to work.
I don't have access to the specifics, but I imagine something between 5 and 10 satellite data centers orbiting the Sun in between Earth and Mars would be enough to maintain communications with minimum delay regardless of where the planets are in their orbits when the comms take place.
At their maximum separation, Mars & Earth are about 20 minutes apart. If we had 10 satellite data centers all in perfect alignment (disregarding the sun, which obviously makes a hash out of things) they'd still each be 2 minutes apart.
Once you take into consideration the sun, plus the fact that you'd need to cover the full disk to keep all data centers within a few minutes of another one in an unbroken chain back to both planets, I just don't get the math involved here.
But, I'm also terrible at both math and visualization, so I readily concede I may be missing something obvious.
Think of it more like 3 circles.
The inner circle has Earth's orbit in it. The outer circle is Mars's orbit.
The middle circle would be a ring of relatively stationary satellites in between them.
And in the center of all 3 circles is the Sun, which will not allow radio signals to pass through.
I drew a crappy illustration to demonstrate: https://ibb.co/tP2rkzS0
When Mars and the Earth are on opposite sides of the sun, a satellite ring can transmit around the sun and keep the communication lines open.
Having a ring of relay satellites gives you a set distance to transmit from Mars. The satellites can then transmit their received data from the one that is closest to Mars to the one that is closest to Earth, which would then send the data to Earth.
This is helpful for a variety of reasons, but the most important one is that with this setup, even when the Sun is in between Earth and Mars, you could still send data around the sun.
Constant communication, no communications breakdowns. Even if 1 satellite failed for some reason, a bit of maneuvering would allow the others to backfill the gap until it could be repaired or replaced.
Even when Earth and Mars are close together, it would still be smart to use the relay so that the power levels are easily calculated and maintained.
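A rough geometry check on that picture (Python; the ring radius and relay counts are assumptions I picked, not anything from the comment):

```python
import math

AU_KM = 1.496e8
C_KM_S = 299_792.458
r = 1.25 * AU_KM  # assumed relay ring between Earth (1 AU) and Mars (~1.52 AU)

# N relays evenly spaced on a circle sit a chord of 2*r*sin(pi/N) apart,
# which sets the per-hop light time around the ring.
for n in (6, 10, 16):
    hop_s = 2 * r * math.sin(math.pi / n) / C_KM_S
    print(f"{n:2d} relays: {hop_s / 60:.1f} light-minutes per hop")
# Even a 10-relay ring has multi-minute hops; the ring buys you a path
# around the Sun, not lower latency.
```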
Author of the initial versions of DTNPerf (iperf for DTNs) and some related papers. I moved on to other areas of SW engineering, but glad to know DTN technology is still looked after. I recently learned that ESA are looking into that as well.
There are too many graphics (>0) and not enough monospaced font for me to take this seriously.
https://www.rfc-editor.org/rfc/rfc1.txt
Steve Crocker, Vint Cerf, Jon Postel (RFC editor) and I all worked together at UCLA. I was there the day the IMP arrived. Heady days.
Could quantum entanglement eliminate the delay?
Nope: https://en.m.wikipedia.org/wiki/No-communication_theorem
What about this https://en.wikipedia.org/wiki/Quantum_Experiments_at_Space_S... ?
No. That does not allow faster-than-light communication (which is impossible).
FTL communication is presumed to be impossible; it actually hasn't been proven impossible.
On the other hand, if it were shown to be possible it would be rather disruptive to many other presumptions in physics.
People are fairly attached to causality.
Well that's just it, my understanding is that FTL hasn't been proven to violate causality, or that causality is inviolable. It's just very strongly hinted at.
In special relativity at least it's pretty clearly the case that communication outside the light cone (so faster than light) will result in events happening in the wrong order in some frames, violating causality. I will not speak of general relativity, as while I've taken a course in it, years later I have returned to considering it largely dark magic.
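A quick way to make that concrete, using nothing beyond the textbook Lorentz transform (standard special relativity, not anything specific to this thread):

```latex
\Delta t' = \gamma \left( \Delta t - \frac{v \, \Delta x}{c^2} \right)
```

If two events are linked by a superluminal signal, then Δx > cΔt, and any observer moving at a speed v with c²Δt/Δx < v < c (a perfectly legal sub-light speed) measures Δt' < 0: in that frame the message arrives before it was sent.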
Suppose you transmit a message to me at a prearranged time: a number. At that prearranged time I pick a number at random and act as if it is your message.
When I eventually get your message some time later, if it turns out my random pick was wrong, I kill myself. If the many worlds interpretation is right, I should only observe universes in which I've managed to conjure up your message faster than causality, right?
> If the many worlds interpretation is right, I should only observe universes in which I've managed to conjure up your message faster than causality, right?
I feel that's pairing MWI with some non-physical (or at least beyond the wave function) overarching "I" that can see across or jump between branches of the wave function, whereas I'd claim the appeal of embracing MWI is largely that the universe's wave function is all there is and observers/consciousness play no special role (along with not having nonlocal random "collapses"). The experiment would be no different than gathering a bunch of people, assigning each a number, then killing the ones that were assigned the wrong number once the real number arrives.
It doesn't involve any jumping; it's just that, from an individual's point of view, they can't have been somebody who ended up dying.
Long term, sure. Short term I think an unpleasant number of your parallel universe copies would observe themselves dying.
star wars except it's comcast 'accidentally' destroying starlink satellite links with 'debris'
I was at UCLA with Vint Cerf ... very cool guy.
"We work to extend terrestrial networking into solar system space..."
Minor nitpick: it's the Solar System - i.e. capitalized (since it's a proper name). The Solar System is the planetary system that we reside in, the one that has the star Sol at its center.
https://en.wikipedia.org/wiki/Solar_System#Definition
> When not used as a proper noun and written without capitalization, "solar system" may refer to either the Solar System itself or any system reminiscent of the Solar System.[14]
They're solving for other solar systems too!