
May 21, 2023

Digital Private Markets Need Data Symmetry

Digital private markets will need equal access to data on the underlying asset (or the assets in a fund); this is referred to as data symmetry. There cannot be data asymmetry in the market, or bidders will either lowball their bids or leave the marketplace entirely.

The secondary market for private market assets today has two solutions: 1) if the stake is big enough, the holder hires a banker to run a process and ensure a level playing field among all participants, which takes time (today Evercore is the leader in this space); or 2) a player like Coller Capital exploits the data asymmetry for its own benefit, since it has better data on secondaries and can quickly deliver a bid and buy the secondary into its own fund.

Digitizing real/private assets requires both a digital twin (a digital primary existence) of the asset (Inveniam) and a digital security (Tokeny or another tokenization provider), and the two must be tightly fused for valuation and liquidity.

Prop 1: Data is better on the Edge; decentralized, secure, and GDPR compliant. Public markets rely on the EDGAR database; exploiting non-public data there is insider trading…for private markets we have created a Federated Data Room: this permissioned data browser does away with the need for intermediate data solutions, lets you add logic to the process at any time, and lets you surveil data at the edge. https://bit.ly/3Ixklc5
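
To make the permissioned-access idea concrete, here is a minimal sketch, assuming a hypothetical grant model and function names (Grant, serve_at_edge): the file stays on its owner's edge node, and every request is checked against a live grant and logged, so access can be surveilled where the data lives. This is an illustration only, not Inveniam's implementation.

```python
# Hypothetical sketch: permissioned access to a file at the edge.
from dataclasses import dataclass
import time

@dataclass
class Grant:
    viewer: str
    path: str
    expires_at: float

ACCESS_LOG: list[dict] = []

def serve_at_edge(viewer: str, path: str, grants: list[Grant]) -> bytes:
    """Return file bytes only if a live grant exists; log every attempt."""
    allowed = any(
        g.viewer == viewer and g.path == path and g.expires_at > time.time()
        for g in grants
    )
    ACCESS_LOG.append({"viewer": viewer, "path": path,
                       "allowed": allowed, "at": time.time()})
    if not allowed:
        raise PermissionError(f"{viewer} has no live grant for {path}")
    with open(path, "rb") as f:
        return f.read()

if __name__ == "__main__":
    # A bidder with no grant is refused, and the attempt is still recorded.
    try:
        serve_at_edge("bidder@fund.com", "appraisal.pdf", grants=[])
    except PermissionError as e:
        print(e)
    print(ACCESS_LOG)
```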

Prop 2: Data is better when current, true, and sufficient to the asset it describes…so we have Workflow, RPA, and file-level AI/validation, plus cryptographic proof of the data's origin and proof that this is the data that was used at the time of calculation. https://bit.ly/45xJ6yX
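
As a sketch of what "proof of origin, used at time of calculation" can mean at the file level: hash the exact bytes consumed, then bind that digest to the calculation's output. The function names and sample file below are illustrative assumptions, not Inveniam's API.

```python
# Sketch: bind a calculation result to the SHA-256 of its source file.
import hashlib
import json
import time

def sha256_of_file(path: str) -> str:
    """Return the SHA-256 digest of a file, streamed in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def bind_calculation(source_path: str, result: float) -> dict:
    """Record which exact bytes were consumed at the time of calculation."""
    return {
        "source_file": source_path,
        "source_sha256": sha256_of_file(source_path),
        "result": result,
        "calculated_at": time.time(),
    }

if __name__ == "__main__":
    # Write a tiny placeholder file so the sketch runs end to end.
    with open("rent_roll_sample.csv", "w") as f:
        f.write("unit,rent\n101,2400\n102,2550\n")
    record = bind_calculation("rent_roll_sample.csv", result=4950.0)
    print(json.dumps(record, indent=2))
    # Anchoring record["source_sha256"] later proves this exact file backed
    # the calculation, without revealing the file itself.
```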

Prop 3: Data needs meaning in order to be truly connected across applications; AI + humans need to identify (or repurpose) structure…so we have AI Assist. Alts data is mostly unstructured and unformatted. Today we use AWS Textract and GPT-4 to pull data from these assets. For the past two years we have built a simple template creator to extract data; now we are plugging more and more models, libraries, and methods into the back end of it. This is AI Assist.
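
A minimal sketch of the extraction step, assuming AWS credentials are configured and using Textract's synchronous detect_document_text call on a local image page; the file name and the GPT-4 prompt mentioned in the comment are illustrative assumptions, not AI Assist's internals.

```python
# Sketch: pull raw text lines from an unstructured document with AWS Textract.
import boto3

def extract_lines(path: str) -> list[str]:
    """Run Textract OCR on a local image page and return its text lines."""
    client = boto3.client("textract")
    with open(path, "rb") as f:
        response = client.detect_document_text(Document={"Bytes": f.read()})
    return [
        block["Text"]
        for block in response["Blocks"]
        if block["BlockType"] == "LINE"
    ]

if __name__ == "__main__":
    lines = extract_lines("k1_statement.png")  # hypothetical input file
    # The lines would next be sent to a model such as GPT-4 with a template
    # prompt, e.g. "Extract issuer, period, and net income as JSON",
    # to turn unstructured text into structured fields.
    print("\n".join(lines))
```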

Prop 4: Users will always want to repackage, refilter, and reconnect data to multiple compute functions on their own, without our help…so we are building a new Datalab.
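
As a hypothetical illustration of the self-serve reshaping Datalab is meant to enable: once extracted fields are tabular, a user can refilter them and reconnect the result to a compute function (here, a simple cap-rate valuation) without vendor help. Column names and figures below are made up.

```python
# Sketch: refilter extracted fields and reconnect them to a compute function.
import pandas as pd

extracted = pd.DataFrame([
    {"asset": "Tower A", "period": "2023Q1", "noi": 1_200_000},
    {"asset": "Tower A", "period": "2023Q2", "noi": 1_260_000},
    {"asset": "Plaza B", "period": "2023Q1", "noi": 840_000},
])

# Refilter: one asset's quarterly NOI. Reconnect: feed it to a cap-rate calc.
tower_a = extracted[extracted["asset"] == "Tower A"]
annualized_noi = tower_a["noi"].mean() * 4
print(f"Implied value at a 6% cap rate: {annualized_noi / 0.06:,.0f}")
```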

Prop 5: Trusting edge and decentralized data requires decentralized proofs and a network of maximally click-through-able footnotes…this is the heart of https://bit.ly/45mpGgk; with bHub you have a single interface to manage a dozen chains.
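
A hedged sketch of the single-interface idea: one call fans a proof digest out to several chains through per-chain adapters. The ChainAdapter protocol, chain names, and stub transactions below are hypothetical; bHub's actual API is not shown here.

```python
# Sketch: one interface that anchors a digest across many chains.
from typing import Protocol

class ChainAdapter(Protocol):
    name: str
    def anchor(self, digest: str) -> str:
        """Write a digest to the chain and return a transaction id."""
        ...

class StubAdapter:
    """Stand-in adapter that pretends to anchor a digest."""
    def __init__(self, name: str):
        self.name = name
    def anchor(self, digest: str) -> str:
        return f"{self.name}-tx-{digest[:8]}"

def anchor_everywhere(digest: str, adapters: list[ChainAdapter]) -> dict[str, str]:
    """One call, many chains: return a transaction id per chain."""
    return {a.name: a.anchor(digest) for a in adapters}

if __name__ == "__main__":
    chains = [StubAdapter("ethereum"), StubAdapter("polygon"), StubAdapter("hedera")]
    print(anchor_everywhere("5f2c9b1a0d3e", chains))
```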

Prop 6: Trusting the teams, processes, calculations, and (re)structuring/ETL requires decentralized proof of process…so we have metachain.
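
A minimal sketch of proof of process, assuming a simple hash chain: each step in an ETL/calculation pipeline commits to the hash of the step before it, so any altered step breaks every hash after it. This illustrates the concept only, not metachain's actual format.

```python
# Sketch: a tamper-evident hash chain over the steps of a pipeline.
import hashlib
import json

def step_hash(prev_hash: str, step: dict) -> str:
    """Hash a process step together with the hash of the step before it."""
    payload = json.dumps({"prev": prev_hash, "step": step}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

if __name__ == "__main__":
    chain = []
    prev = "0" * 64  # genesis value
    for step in [
        {"actor": "analyst", "action": "upload", "file": "rent_roll.csv"},
        {"actor": "rpa-bot", "action": "validate", "rule": "totals-match"},
        {"actor": "valuer", "action": "calculate", "method": "DCF"},
    ]:
        prev = step_hash(prev, step)
        chain.append(prev)
    # Recomputing from the same inputs must reproduce these hashes exactly.
    print("\n".join(chain))
```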
