In this week’s core team meeting, we made significant progress on two fronts:
Update Propagation Over the Network
Updates are now successfully propagating across Freenet peers, a key milestone toward stable multi-node operation. Ignacio and Hector have been working together to finalize this. There may still be an edge case affecting update propagation through intermediary peers, but once that is either ruled out or fixed, we expect to release a new version shortly.
This week’s developer meeting highlighted steady progress toward Freenet’s upcoming release. Key milestones include successful testing of peer-to-peer connections, notably hole-punching, which allows nodes behind NATs and firewalls to communicate seamlessly. This feature is critical for maintaining decentralized connectivity and performed well during tests.
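Hole punching is conceptually simple even though the engineering around it is fiddly. As a rough illustration only (not Freenet’s transport code), and assuming both peers have already learned each other’s public address through a mutually reachable gateway, the core trick looks like this:

```rust
// Bare-bones UDP hole punching sketch. Each outgoing packet creates a
// mapping in our own NAT for `remote`; once both sides have sent, the
// peer's packets can get through.
use std::net::UdpSocket;
use std::time::Duration;

fn punch(local: &str, remote: &str) -> std::io::Result<UdpSocket> {
    let sock = UdpSocket::bind(local)?;
    sock.set_read_timeout(Some(Duration::from_millis(500)))?;
    let mut buf = [0u8; 64];
    for _ in 0..10 {
        // Outgoing packet opens (or refreshes) our NAT mapping for `remote`.
        sock.send_to(b"punch", remote)?;
        // If the peer's packet arrived, the hole is open in both directions.
        if let Ok((_, from)) = sock.recv_from(&mut buf) {
            println!("hole punched; peer reachable at {from}");
            return Ok(sock);
        }
    }
    Err(std::io::Error::new(std::io::ErrorKind::TimedOut, "no reply from peer"))
}
```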
The team has implemented an automated configuration system, enabling nodes to download gateway
settings directly, streamlining the setup process. Ignacio and Hector demonstrated scripts for
quickly setting up nodes and gateways, simplifying deployment for developers and testers. Once
released, most users will be able to get started with a single cargo install command.
The Challenge of Consistency in Distributed Systems
Achieving consistency across distributed systems is a notoriously difficult problem. The key reason
is that, in a distributed environment, multiple nodes can independently make changes to the same
piece of data. When different nodes hold different versions of this data, deciding how to reconcile
these differences without losing valuable updates or introducing conflicts becomes a complex
challenge.
Traditional approaches often require coordination mechanisms, such as consensus algorithms (like Paxos or Raft), to ensure consistency. However, these methods can be resource-intensive, incur high communication overhead, and often struggle with scalability, especially when dealing with frequent updates across many nodes. The famous CAP theorem formalizes part of the difficulty: a distributed system cannot simultaneously guarantee all three of Consistency, Availability, and Partition tolerance, so when the network partitions it must sacrifice either consistency or availability. This makes it hard to achieve strong consistency while keeping a system always available and partition-tolerant.
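One way around this trade-off, a direction reflected in Freenet’s contract-state merging described later in this digest, is to make every update commutative so that replicas converge without any consensus round. Here is a minimal, Freenet-agnostic sketch of the idea, using a grow-only counter CRDT:

```rust
// Grow-only counter CRDT: each node tracks per-node counts, and merge takes
// the element-wise max. Merge is commutative, associative, and idempotent,
// so replicas converge no matter the order in which updates arrive.
use std::collections::HashMap;

#[derive(Clone, Default)]
struct GCounter {
    counts: HashMap<&'static str, u64>,
}

impl GCounter {
    fn increment(&mut self, node: &'static str) {
        *self.counts.entry(node).or_insert(0) += 1;
    }

    fn merge(&mut self, other: &GCounter) {
        for (node, &n) in &other.counts {
            let mine = self.counts.entry(*node).or_insert(0);
            *mine = (*mine).max(n);
        }
    }

    fn value(&self) -> u64 {
        self.counts.values().sum()
    }
}

fn main() {
    let (mut a, mut b) = (GCounter::default(), GCounter::default());
    a.increment("a");
    b.increment("b");
    b.increment("b");
    a.merge(&b);
    b.merge(&a);
    assert_eq!(a.value(), b.value()); // both replicas converge to 3
}
```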
In the 1960s, psychologist Stanley Milgram conducted an influential experiment that revealed something amazing about human relationships. Milgram chose people at random in Midwestern cities such as Wichita and Omaha and gave each a letter bearing the address of someone they didn’t know in Boston, Massachusetts. They were instructed to get the letter to that person, but only by sending it to someone they knew personally, who would send it to someone they knew personally, and so on. Milgram repeated this letter-sending experiment nearly 200 times. On average, the letters that arrived reached their target in just six steps; this is where we get the term ‘six degrees of separation.’ Milgram’s findings demonstrated that despite the vastness of the world, most individuals are only a few links away from each other, highlighting the surprisingly small number of intermediaries connecting us all.
This Week’s Progress
We focused on stabilizing network operations after recent updates. Most major issues have been resolved, but a key challenge remains with the WebSocket API:
WebSocket Connection Stability:
Issue: WebSocket connections occasionally drop, particularly during contract updates. This
may be due to the lack of a keep-alive mechanism or another issue with how the client handles
connections.
Next Steps: Investigating whether periodic ping messages can prevent these disconnections.
The application and node will also be tested to ensure they handle connections robustly.
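If a keep-alive turns out to be the answer, the client-side change is small. A hypothetical sketch (assuming a tokio-tungstenite-style client where Message::Ping carries a Vec<u8>; the endpoint and interval are placeholders, not Freenet’s actual values):

```rust
use futures_util::SinkExt;
use std::time::Duration;
use tokio_tungstenite::{connect_async, tungstenite::Message};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder endpoint, not Freenet's actual WebSocket address.
    let (mut ws, _resp) = connect_async("ws://127.0.0.1:8080/ws").await?;
    let mut tick = tokio::time::interval(Duration::from_secs(10));
    loop {
        tick.tick().await;
        // A dead connection surfaces as an error here rather than silently
        // hanging, which is exactly what a keep-alive is for.
        ws.send(Message::Ping(Vec::new())).await?;
    }
}
```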
Significant cleanup has been done, focusing on resolving issues with the transport layer and
dependencies.
The transport layer is working well, and the remaining issues are expected to be fixed within the next few days.
The handshake handler has been thoroughly tested, with only minor remaining issues that are
actively being addressed.
A new monitoring and logging tool is almost ready and will be integrated soon.
Next Steps:
Larger network simulations will be conducted to test Freenet’s behavior with more peers.
Live testing in a real network environment will verify peer-to-peer connectivity and hole punching.
Final testing of key contracts (e.g., microblogging, mailing) is planned to ensure they work
correctly, though some may be revisited after the initial release.
Improved Connectivity: Recent changes allow peers to establish more connections, even when multiple gateways are involved. Although this work is not fully complete, connectivity between peers is progressing well, and extensive testing will continue to ensure robustness.
UI Enhancements: A new UI is being developed to monitor and debug the network. This will aid
in integration testing, making it easier to identify and fix issues in real-time, and will be
helpful as we prepare for the release.
Network Operations Testing: Local testing of basic network operations (e.g., boot, update,
subscribe) has shown positive results, with most issues resolved. The next focus is on addressing
remaining test failures and improving reliability.
Transport Layer Improvements: Joining the network through a gateway now works reliably, with most errors resolved. Nodes are acquiring connections, and a retry mechanism ensures connections eventually succeed even when initial attempts fail (sketched below).
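The retry logic mentioned above is generic enough to sketch in a few lines (this illustrates the pattern, not Freenet’s actual implementation):

```rust
use std::time::Duration;

// Retry an async connection attempt with exponential backoff, doubling the
// delay after each failure up to a fixed cap.
async fn connect_with_retry<F, Fut, T, E>(mut attempt: F, max_tries: u32) -> Result<T, E>
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Result<T, E>>,
{
    let mut delay = Duration::from_millis(200);
    let mut last_err = None;
    for _ in 0..max_tries {
        match attempt().await {
            Ok(conn) => return Ok(conn),
            Err(e) => {
                last_err = Some(e);
                tokio::time::sleep(delay).await;
                delay = (delay * 2).min(Duration::from_secs(5));
            }
        }
    }
    Err(last_err.expect("max_tries must be > 0"))
}
```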
Unit Test Success: Most transport layer unit tests are passing, with the system able to
establish connections after retries. Random packet drop simulations highlight some intermittent
failures, but overall functionality is stable.
Connection Debugging: Logs show nodes progressively acquiring connections over time. The team
is working on cleaning up the test environment for better debugging.
State Synchronization: Currently, when peers update their state, the entire state is sent
rather than just deltas. This approach is suboptimal, and the plan is to shift to delta updates
after the initial release.
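For illustration, the difference between the two approaches looks something like this (the types are invented for the example and are not Freenet’s contract interface):

```rust
// Full-state sync sends `State` wholesale; delta sync sends only `Delta`,
// keeping update size proportional to what actually changed.
#[derive(Clone, Default)]
struct State {
    messages: Vec<String>,
}

struct Delta {
    new_messages: Vec<String>,
}

impl State {
    /// What a peer that already holds the first `since` messages is missing.
    fn delta_since(&self, since: usize) -> Delta {
        Delta { new_messages: self.messages[since..].to_vec() }
    }

    /// Apply a delta received from another peer.
    fn apply(&mut self, delta: Delta) {
        self.messages.extend(delta.new_messages);
    }
}
```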
Just a brief update this week as we work towards the alpha release.
Gateway Connection Handling
After a few more fixes, connections with gateways are now handled smoothly. Although the transport may still fail sporadically, we now retry connections appropriately, which resolves many of the previous issues.
Regular Peer Connections
Regular peers can now connect with each other! While there is still some weirdness that we are
investigating, the connections are finally working as expected.
Peer-to-Peer Connectivity: Gateways and peers can now successfully connect and communicate
with each other. While there are still minor issues, the main structure is operational, and
communication between nodes works as expected.
Message Transmission: Messages can be sent between peers, and errors that do occur generally
resolve themselves as the system retries connections.
Unit Tests: Existing unit tests for peer connections are passing, indicating stability in
fundamental network components.
Telemetry & Logging: Improved logging and monitoring tools have made debugging easier, and
these tools will help spot issues quickly as the network grows.
In this interview, Michael from FUTO sits down with Ian Clarke to discuss the revolutionary concept
of Ghost Keys. They explore how these anonymous, verifiable identities could address some of the
Internet’s foundational flaws and delve into the future of decentralized, Cypherpunk-inspired
reputation systems.
On May 3rd, 1978, Gary Thuerk, a marketing manager at Digital Equipment Corporation, sent the first
spam email to 400 people. It was an invitation to a product demonstration for the DEC-20 computer,
and the reaction was immediate and negative.
Nearly 50 years later, this same flaw in the internet’s design has given rise to more significant
issues. Today, AI-driven bots not only overwhelm us with spam but also manipulate social and
political discourse at scale.
This week we’ve been focused on a crucial refactoring task to improve how we manage network connections. The goal is to make it easier to isolate bugs by separating the connection handling logic from the transport layer so they can be tested independently.
What’s Changing:
Decoupling Connection Handling:
We’re separating the connection handling code from the transport layer. This change allows us to
test connection states on their own, without involving the transport mechanisms.
With this separation, we can emulate connections and test different states more accurately, pinpointing problems faster (see the sketch after this list).
Enhanced Testing and Debugging:
By isolating the connection handling, Nacho has created a series of unit tests to cover various
connection scenarios, such as establishing, rejecting, and accepting connections.
This approach helps us identify areas needing improvement and ensures our changes lead to a more
stable system.
Clearer Error Handling:
The refactor also simplifies error handling. By separating concerns, it’s easier to see if issues
come from the connection handling or the transport layer, making debugging more straightforward.
Streamlined Codebase:
We’ve removed redundant and tangled code, simplifying the codebase and reducing potential failure
points.
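To make the shape of the refactor concrete, here is a minimal sketch of the pattern (trait and type names are invented for illustration; the real code is more involved):

```rust
// The connection state machine depends only on a Transport trait, so tests
// can drive it with an in-memory mock instead of real sockets.
trait Transport {
    fn send(&mut self, to: u64, bytes: &[u8]) -> Result<(), String>;
}

#[derive(Debug, PartialEq)]
enum ConnState {
    Connecting,
    Connected,
    Rejected,
}

struct ConnectionHandler<T: Transport> {
    transport: T,
    state: ConnState,
}

impl<T: Transport> ConnectionHandler<T> {
    fn new(transport: T) -> Self {
        Self { transport, state: ConnState::Connecting }
    }

    /// Kick off a connection attempt to `peer`.
    fn connect(&mut self, peer: u64) -> Result<(), String> {
        self.transport.send(peer, b"CONNECT")
    }

    /// Drive the state machine from the peer's response.
    fn on_response(&mut self, accepted: bool) {
        self.state = if accepted { ConnState::Connected } else { ConnState::Rejected };
    }
}

/// In-memory transport that just records outgoing messages, for tests.
#[derive(Default)]
struct MockTransport {
    sent: Vec<(u64, Vec<u8>)>,
}

impl Transport for MockTransport {
    fn send(&mut self, to: u64, bytes: &[u8]) -> Result<(), String> {
        self.sent.push((to, bytes.to_vec()));
        Ok(())
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn rejection_is_testable_without_real_sockets() {
        let mut conn = ConnectionHandler::new(MockTransport::default());
        conn.connect(42).unwrap();
        conn.on_response(false);
        assert_eq!(conn.state, ConnState::Rejected);
        assert_eq!(conn.transport.sent.len(), 1);
    }
}
```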
Next Steps:
Completing the Refactor:
Nacho is close to finishing this refactor. The plan is to replace all the old connection handling
code with the new modular implementation.
This change will make the system easier to maintain and test, setting us up well for future
enhancements.
Focusing on Transport Layer Issues:
Once the refactor is done, we’ll turn our attention to fixing any remaining transport layer
issues. With the connection handling logic isolated, identifying and addressing these issues
should be more manageable.
We’ll add more unit tests for the transport layer to cover all edge cases and ensure it works
reliably.
Preparing for the Next Release:
If the transport layer is stable after the refactor, we’ll move forward with a release. This
update will include the recent improvements and ensure our core network functionalities are solid.
Conclusion
This refactor should be the last step before launching the Freenet network. By modularizing the
connection handling, we can test more thoroughly and fix issues more quickly, leading to a more
stable platform.
Ian Clarke, the creator of Freenet, explains how we solve problems like efficiently finding data, adapting to changing network conditions, and managing a peer’s resource usage. The Q&A covers how Freenet compares to other networks, the history of Freenet, and how Freenet adapts to geography.
Ian has been working on the Freenet chat system and shared a specification document. He decided to
focus on a web-based interface rather than a command-line interface due to ease of implementation.
A significant topic was the method for updating contracts within the network. Ian proposed a ‘replace contract’ field that allows a contract to be superseded by a new one signed by the contract owner, similar to an HTTP 301 redirect.
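To illustrate the idea (the types and the verification step here are stand-ins, not the proposed specification):

```rust
// A contract's state carries an optional, owner-signed pointer to a
// successor contract, which clients follow like an HTTP 301 redirect.
struct Replacement {
    successor_contract: [u8; 32], // key of the new contract
    signature: Vec<u8>,           // owner's signature over the successor key
}

struct ContractState {
    data: Vec<u8>,
    replaced_by: Option<Replacement>,
}

/// Resolve a contract key, following at most one redirect after verifying
/// the owner's signature (verification stubbed out here).
fn resolve(key: [u8; 32], fetch: impl Fn([u8; 32]) -> ContractState) -> ContractState {
    let state = fetch(key);
    match &state.replaced_by {
        Some(r) if verify_owner_signature(&r.signature, &r.successor_contract) => {
            fetch(r.successor_contract)
        }
        _ => state,
    }
}

fn verify_owner_signature(_sig: &[u8], _msg: &[u8; 32]) -> bool {
    true // placeholder: real code would check against the contract owner's key
}
```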
Focus on Connection Stability and Transport Improvements
Our primary focus has been on enhancing the stability and functionality of the connection and
transport layers within Freenet. Ignacio has dedicated significant effort to addressing issues related to the connect operation and transport mechanisms. We’ve identified and resolved several bugs,
ensuring that connections are maintained properly and cleaned up when lost. Although we haven’t
fully tested all scenarios, the connect operation is now functioning as expected.
In our latest dev meeting, we dove into the recent updates and challenges with the Freenet network
protocol. Ignacio shared the latest on our efforts to boost node connectivity and stability across
the network.
What’s New:
Stability and Bug Fixes: We spent a good chunk of this week squashing bugs to make network
connections more stable. Ignacio explained the technical hurdles we’re tackling to keep
connections between nodes reliable. It’s tricky but we’re making headway.
Integration and Testing: We’re knee-deep in integrating and manually testing the latest
changes. We’re also working toward getting CI passing. This is key to catching regressions early
and ensuring everything holds up under stress.
Gateway Improvements: We’ve made some solid progress on enhancing how gateways handle peer connections, which are crucial for bringing new nodes into the network.
What’s Next:
Ramping Up Testing: Now that we’ve nailed down most of the glaring issues, it’s time to push
harder with more comprehensive testing. We’re talking multiple nodes and gateways to really test
the limits of scalability and performance.
Tool Enhancements: We’re also planning to upgrade our network simulation tool, which is vital
for a clear and efficient view of what’s happening network-wide.
In this week’s Freenet developer meeting, Ian Clarke and Ignacio Duart discussed significant
advancements and remaining challenges before getting the network up and running. The primary focus
was on refining the connection and configuration processes within Freenet’s system. Key highlights
include:
Configuration Management: The developers have implemented a system to set and save default
configurations, which are crucial for initializing and maintaining stable operations after
restarts. The configuration files are currently managed using TOML due to its robust support in
Rust.
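A rough sketch of what TOML-backed defaults look like in Rust with serde and the toml crate (the field names are invented, not Freenet’s actual schema):

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug)]
struct NodeConfig {
    /// Address the node listens on.
    listen_addr: String,
    /// Gateways to bootstrap from.
    gateways: Vec<String>,
}

impl Default for NodeConfig {
    fn default() -> Self {
        Self {
            listen_addr: "0.0.0.0:31337".into(),
            gateways: vec!["gateway.example.org:31337".into()],
        }
    }
}

fn load_or_create(path: &std::path::Path) -> std::io::Result<NodeConfig> {
    match std::fs::read_to_string(path) {
        // An unparseable file is a hard error for this sketch.
        Ok(text) => Ok(toml::from_str(&text).expect("invalid config file")),
        // First run: persist the defaults so they survive restarts.
        Err(_) => {
            let cfg = NodeConfig::default();
            std::fs::write(path, toml::to_string_pretty(&cfg).unwrap())?;
            Ok(cfg)
        }
    }
}
```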
We had a detailed technical discussion focusing on various aspects of our project, Freenet. Here’s a
breakdown of the key points:
Contract Update Mechanism: We tackled the contract update mechanism, crucial for handling the
state (the associated data) of contracts in our key-value store. This involves understanding how
updates are initiated by applications, the merging of states, and the process of sending updates
to subscribers.
Update and Merge Process: We discussed the specifics of how updates work, particularly
focusing on the ‘put’ operation. The conversation clarified how ‘puts’ are handled differently
depending on whether the application is already subscribed to the contract. A ‘put’ doesn’t
necessarily need a ‘get’ first. It’s about merging states if the contract exists and managing
updates accordingly.
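A toy version of that put-merge-notify flow, modeling the state as a set so that merging is simply union (illustrative types, not Freenet’s contract API):

```rust
use std::collections::BTreeSet;

struct Contract {
    state: BTreeSet<String>,
    subscribers: Vec<u64>, // peer ids, purely illustrative
}

impl Contract {
    /// Handle a `put`: merge the incoming state into the stored one and
    /// return the subscribers to notify if anything actually changed.
    fn put(&mut self, incoming: BTreeSet<String>) -> &[u64] {
        let before = self.state.len();
        self.state.extend(incoming); // set union, so concurrent puts commute
        if self.state.len() != before { &self.subscribers } else { &[] }
    }
}
```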
Zero-knowledge proofs are one of the most significant developments in cryptography since the
invention of public-key crypto in the 1970s. But what exactly are they, what are they useful for,
and what is their relevance to Freenet?
What Are Zero-Knowledge Proofs?
In essence, zero-knowledge proofs (ZKPs) allow one party to prove that they have some secret information without revealing anything about the information itself. For example, you could prove that you have some data satisfying a certain condition, such as hashing to a particular value, without revealing the data itself.
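The classic toy example is a Schnorr-style proof of knowledge of a discrete logarithm. The sketch below uses tiny constants so the arithmetic fits in a u64; a real system would use a large prime-order group and derive the challenge from a hash:

```rust
// Toy interactive Schnorr proof: the prover convinces the verifier that it
// knows x with y = g^x mod p, without revealing x. Constants are tiny for
// illustration only.
const P: u64 = 467; // small prime; the group Z_p^* has order p - 1 = 466
const G: u64 = 2;   // group element chosen for illustration

fn pow_mod(mut b: u64, mut e: u64, m: u64) -> u64 {
    let mut r = 1;
    b %= m;
    while e > 0 {
        if e & 1 == 1 { r = r * b % m; }
        b = b * b % m;
        e >>= 1;
    }
    r
}

fn main() {
    let x = 153;              // prover's secret
    let y = pow_mod(G, x, P); // public value

    // Prover commits to a random nonce r.
    let r = 97;
    let t = pow_mod(G, r, P);

    // Verifier sends a random challenge c.
    let c = 31;

    // Prover responds; s leaks nothing about x because r is random.
    let s = (r + c * x) % (P - 1);

    // Verifier checks g^s == t * y^c (mod p).
    assert_eq!(pow_mod(G, s, P), t * pow_mod(y, c, P) % P);
    println!("proof verified without revealing x");
}
```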
Ian Clarke explains how to build apps on Freenet, including the basic components of Freenet’s
architecture like contracts, delegates, and user interfaces.
Traditional approaches to rate-limiting the creation of things like coins, tokens, and identities in distributed networks rely on computational barriers (proof-of-work) or financial barriers (proof-of-stake) as a source of scarcity. While effective in some contexts, these methods are wasteful and unfairly favor those with more resources.
What is Proof-of-Trust?
Proof-of-Trust offers an alternative by utilizing the scarcity of reciprocal trust between
individuals as the primary resource for rate-limiting various network activities. Unlike existing
models, Proof-of-Trust does not inherently favor participants with greater computational power or
financial means.
Every node in the Freenet network has a location, a floating-point value between 0.0 and 1.0
representing its position in the small-world network. These are arranged in a ring so positions 0.0
and 1.0 are the same. Each contract also has a location that is deterministically derived from the
contract’s code and parameters through a hash function.
The network’s goal is to ensure that nodes close together are much more likely to be connected than distant nodes; specifically, the probability of two nodes being connected should be proportional to 1/distance, where distance is measured around the ring.
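In code, the ring metric and the 1/distance preference are only a few lines (a sketch of the math above, not Freenet’s exact implementation):

```rust
/// Distance on the unit ring: locations 0.9 and 0.1 are 0.2 apart, not 0.8.
fn ring_distance(a: f64, b: f64) -> f64 {
    let d = (a - b).abs();
    d.min(1.0 - d)
}

/// Unnormalized preference for linking two locations, proportional to
/// 1/distance (clamped to avoid division by zero for identical locations).
fn connection_weight(a: f64, b: f64) -> f64 {
    1.0 / ring_distance(a, b).max(1e-6)
}

fn main() {
    assert!((ring_distance(0.9, 0.1) - 0.2).abs() < 1e-12);
    // A nearby pair is weighted far more heavily than a distant one.
    assert!(connection_weight(0.50, 0.51) > connection_weight(0.50, 0.90));
}
```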