From the team…
To begin, we are opening the dev call to the public this Tuesday to chat about the info that follows. If you have time, please join us here in Discord at 1pm EST.
First, some background. After the fork, we determined that we needed to write our own stack as an abstraction over the services that implement the core Sia protocol: us, muse, walrus, and shard. On top of those, we are building an S3 gateway, customer authentication/billing, a database for metadata, and our own solution for asynchronous contract access. We also needed to implement our own renter, which specifies the desired contract sets, makes the erasure coding and encryption calls, and coordinates the upload/download of actual data between the customer and the Provider network.
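To make the renter's job concrete, here is a toy sketch of that pipeline: encrypt the data, split it into shards with simple single-parity erasure coding, and show that the file survives losing one shard. Every name here (toy_encrypt, make_shards, recover) is illustrative only; the real renter uses proper ciphers and Reed-Solomon coding, not this XOR stand-in.

```python
from functools import reduce

def toy_encrypt(data: bytes, key: int) -> bytes:
    # XOR stream "cipher" stand-in; XOR is its own inverse, so this also decrypts
    return bytes(b ^ key for b in data)

def make_shards(data: bytes, k: int) -> list:
    # Split into k equal data shards (zero-padded) plus one XOR parity shard,
    # so any single lost shard can be rebuilt
    size = -(-len(data) // k)  # ceiling division
    padded = data.ljust(k * size, b"\0")
    shards = [padded[i * size:(i + 1) * size] for i in range(k)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*shards))
    return shards + [parity]

def recover(shards: list, missing: int) -> bytes:
    # XOR-ing all surviving shards reproduces the missing one
    survivors = [s for i, s in enumerate(shards) if i != missing]
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
```

In the real system each shard would go to a different Storage Provider under its own contract; the point of the sketch is just the encrypt-then-shard ordering and the redundancy that lets a download succeed with a host offline.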
It’s a very big job because, in many ways, we are rewriting the largest section of the Sia codebase AND we have to stay compatible with all the features that make AWS S3 great.
We completed the S3 gateway, including multipart uploads (a sticky item for everyone doing S3 compatibility), and started initial testing via Commvault using manual contracting and wallet operations. Small datasets worked nearly perfectly, but the Relayer/storage network choked on larger data sizes and volumes. That led us to create a microservice we call Hostio, initially built to offload the S3 logic from the renter. Offloading gives the renter room to deal with locking issues, and over time Hostio is intended to become a completely stateless module, which is useful at scale and for ongoing testing of feature rollouts.
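For readers wondering why multipart uploads are such a sticky item: the S3 protocol splits an upload into three calls (create, upload-part, complete), so a gateway has to track per-upload state and reassemble parts in the client's declared order. A minimal in-memory model of that flow, with hypothetical names rather than our actual gateway code:

```python
import hashlib
import uuid

class MultipartStore:
    """Toy in-memory model of the S3 multipart upload flow."""

    def __init__(self):
        self.uploads = {}  # upload_id -> (object key, {part_number: (etag, data)})
        self.objects = {}  # completed objects: key -> bytes

    def create_multipart_upload(self, key: str) -> str:
        upload_id = uuid.uuid4().hex
        self.uploads[upload_id] = (key, {})
        return upload_id

    def upload_part(self, upload_id: str, part_number: int, data: bytes) -> str:
        # S3 returns an ETag per part; the client echoes it back on completion
        etag = hashlib.md5(data).hexdigest()
        self.uploads[upload_id][1][part_number] = (etag, data)
        return etag

    def complete_multipart_upload(self, upload_id: str, parts: list) -> None:
        # parts: [(part_number, etag), ...] in the order the client wants them joined
        key, stored = self.uploads.pop(upload_id)
        body = b""
        for part_number, etag in parts:
            stored_etag, data = stored[part_number]
            if stored_etag != etag:
                raise ValueError(f"ETag mismatch for part {part_number}")
            body += data
        self.objects[key] = body
```

The hard part in practice isn't this happy path; it's that different S3 clients send wildly different part sizes and orderings, which is exactly what the buffering discussed below has to absorb.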
But we quickly realized we would also need to rewrite a whole set of methods making up a module called PseudoFS, and it became clear that was probably wasted effort. Instead, we decided to bite the bullet and flesh out Hostio fully so that it includes buffers and caching. It sits at the center of the Relayer and does the heavy lifting between customer upload/download tools and the Storage Provider network. It will allow much greater scaling than we could achieve previously and clears the way for implementing all the other services shown in this master diagram.
We're spilling some secrets of our IP here, but it's worth it to show what is going on and what we are trying to do. The Consensus/Host box may change if we decide to implement lite configurations for our upcoming NAS appliance.
Part of the problem is that the galaxy of S3 tools out there implement the protocol differently: sending varied chunk sizes, implementing (or not implementing) multipart uploads, and so on. The trouble comes when these requests get locked by our network: contract items like collateral (which we intend to call Insurance going forward) get muffed up and throw errors. Initial buffering allows us to group items in a smarter way, sending appropriately sized data off to be sharded, encrypted, and uploaded, while smaller files go directly into the database until enough accumulates to create a new segment for upload (the cache).
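As a rough illustration of that buffering idea (the threshold, names, and storage stand-ins below are all hypothetical, not actual Hostio internals): payloads at or above segment size take the direct shard/encrypt/upload path, while small files accumulate in a cache and are packed into a segment once enough bytes arrive.

```python
class WriteBuffer:
    """Toy sketch of size-based routing: big writes go straight out,
    small writes are cached until they fill a segment."""

    def __init__(self, segment_size: int):
        self.segment_size = segment_size
        self.cache = []               # (name, data) small files awaiting packing
        self.cached_bytes = 0
        self.uploaded_segments = []   # stand-in for the Provider network
        self.db_rows = []             # stand-in for the metadata database

    def put(self, name: str, data: bytes) -> None:
        if len(data) >= self.segment_size:
            # Large enough to shard/encrypt/upload on its own
            self.uploaded_segments.append((name, data))
        else:
            # Small file: record metadata and hold in cache
            self.cache.append((name, data))
            self.cached_bytes += len(data)
            self.db_rows.append((name, len(data)))
            if self.cached_bytes >= self.segment_size:
                self.flush()

    def flush(self) -> None:
        # Pack all cached small files into one segment for upload
        if not self.cache:
            return
        packed = b"".join(data for _, data in self.cache)
        self.uploaded_segments.append(("segment", packed))
        self.cache, self.cached_bytes = [], 0
```

A real implementation would also flush on a timer and track which segment each small file landed in, but the routing decision shown here is the core of why buffering keeps oddly sized client writes from tripping over contract locking.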
Our work on Hostio also illuminates the work we need to do in Q1, after we get the basic MVP out: how we will auto-provision Relayers for individual customers; wallet interactions and the API calls to exchanges to auto-purchase and seed wallet addresses; and dynamic contracting through the contract server, potentially including something we might call Master Contracts.
The current Hostio refactoring is slated for completion this weekend, and with a bit of luck we should be cleared to start pumping through bigger sets of test data and to onboard the first one or two beta testers. MVP can likely happen in January, with the original assumption of a fair amount of flintstoning behind the scenes still in place. We appreciate the Storage Providers' and Community's patience as we pushed through this enormous blocker. In many ways, the plane is on the runway now for the first of many successful test flights.