Commit by Mykola Siusko, 2023-10-30 16:15:57 +01:00 (committed via GitHub)
# Product features
Here we prototype potential product features for our "l2beat for privacy" platform. They are all based on the market survey available [here](https://docs.google.com/spreadsheets/d/1JWpAsGL10UTsVeuIVbouzUxRjaSPUAamxcbFljXuUWE/edit?usp=sharing).
1. **Validity track** ("yes/no" applicability, e.g. whether "docs" exist or not).
_Benefit_: easy to source manually & empowers the scoring model with links.
_Downside_: the existence of a Git repo or docs doesn't guarantee the state of privacy within a given solution.
**Approach**: the "validity track" serves as an MVP of privacy analytics and challenges a culture of open-source delivery.
_Validity examples_ (yes, no):
- Github repo
- Docs
- Public team
- Third-party audit
These checks could be rolled up into a 100% score (4 yes answers, 25% each; e.g. 2 no & 2 yes = 50%).
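The equal-weight validity scoring above can be sketched in a few lines of Python; the function name and the example check names are illustrative, not a fixed schema:

```python
# Minimal sketch of the "validity track" scoring: each yes/no check
# carries an equal share of 100%. The four example checks (GitHub repo,
# docs, public team, third-party audit) come from the list above.

def validity_score(checks: dict[str, bool]) -> float:
    """Return a 0-100 score where each passed check has equal weight."""
    if not checks:
        return 0.0
    weight = 100 / len(checks)
    return weight * sum(checks.values())

project = {
    "github_repo": True,
    "docs": True,
    "public_team": False,
    "third_party_audit": False,
}
print(validity_score(project))  # 2 of 4 checks pass -> 50.0
```

Keeping the weights equal makes the score trivial to source and verify manually, which matches the MVP intent.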
2. **101 educational materials**.
_Benefit_: significantly upgrades DYOR & flags the main misconceptions about privacy execution; one guide could serve all audiences.
_Downside_: too many content pieces to write or assemble to cover the whole spectrum of privacy services & nuances (needs to be broken down into delivery phases).
**Approach**:
- select essential educational materials for the first audiences (example: crypto-native but not privacy-native users, rather than serving "non-crypto" audiences first)
- make 101 modular (block system) + composable (to be reused in other guides)
_Selected 101s_:
- privacy tech vocabulary
- privacy data leakages 101
- transaction traceability 101 (the basics on an Etherscan example)
- web3 comms 101 (with a focus on misleading comms that create an obscure understanding of the tech/product)
The scoring model could consist of passive & active parts: the passive part covers what can be automated or requires no subjective judgment, while the active part is reserved for personal consideration (research).
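One hypothetical way to encode the passive/active split is to tag each metric and average the two groups separately; the metric names, scores, and `Metric` type below are assumptions for illustration only:

```python
# Sketch of splitting the scoring model into a "passive" part
# (automatable, no subjective judgment) and an "active" part
# (manual research). Names and weights are illustrative.

from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    passive: bool   # True = can be automated, False = needs human research
    score: float    # normalized 0.0 .. 1.0

def split_scores(metrics: list[Metric]) -> dict[str, float]:
    """Average the passive and active metrics separately."""
    result = {}
    for kind, flag in (("passive", True), ("active", False)):
        group = [m.score for m in metrics if m.passive == flag]
        result[kind] = sum(group) / len(group) if group else 0.0
    return result

metrics = [
    Metric("github_repo_exists", passive=True, score=1.0),
    Metric("docs_exist", passive=True, score=1.0),
    Metric("comms_clarity_review", passive=False, score=0.5),
]
print(split_scores(metrics))  # {'passive': 1.0, 'active': 0.5}
```

Reporting the two sub-scores separately keeps the automatable signal distinct from the parts that depend on a researcher's judgment.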
3. **Checklists** (example: trusted sources list).
_Benefit_: serves as self-check navigation within the complex audit stream (and a predecessor of an actionable database).
_Downside_: there will always be something missing from the list.
**Approach**:
- pick a direction (like "trusted sources")
- write it down as an actionable plan
- help a person spend less time on data aggregation (the easier the checklist is to follow, the better)
_Checklist MVP_:
- **Storage**: What user information is stored? (username, IP address, last connection, associated wallets, etc.) -> the less the better
- **Infra**: number of nodes/servers -> the larger the footprint, the better the privacy
- **Signup**: no email or phone number required for signup -> the less data the better
- **Traction**: number of people using it -> the more the better (with examples)
- **Public comms**: simplified socials analysis (for a negative sentiment)
- **Product-readiness**: test-net, mainnet; date of the release.
This is a mix of objective metrics & subjective takeaways (e.g. an old product release can be a negative signal for privacy & vice versa). Case studies will help to substantiate these assumptions.
If we collect a good library of case studies, it will give people market benchmarks for every step of the scoring system.
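The checklist MVP above could be encoded as structured data that records whether each item is an objective metric or a subjective takeaway; the field names and the `kind` labels are assumptions, not a fixed schema:

```python
# One way to structure the checklist MVP so each item carries its
# question, its rule of thumb, and whether scoring it is objective
# (automatable) or subjective (needs human judgment).

checklist = [
    {"item": "storage", "question": "What user information is stored?",
     "rule": "the less the better", "kind": "objective"},
    {"item": "infra", "question": "Number of nodes/servers?",
     "rule": "the larger the footprint, the better the privacy",
     "kind": "objective"},
    {"item": "signup", "question": "Email/phone required for signup?",
     "rule": "the less data the better", "kind": "objective"},
    {"item": "traction", "question": "Number of people using it?",
     "rule": "the more the better", "kind": "objective"},
    {"item": "public_comms", "question": "Socials sentiment?",
     "rule": "flag negative sentiment", "kind": "subjective"},
    {"item": "product_readiness", "question": "Testnet/mainnet, release date?",
     "rule": "context-dependent", "kind": "subjective"},
]

# Objective items are candidates for automated checks later on.
objective = [c["item"] for c in checklist if c["kind"] == "objective"]
print(objective)  # ['storage', 'infra', 'signup', 'traction']
```

Tagging items this way makes it explicit which parts of the checklist could feed an automated score and which must stay as researcher notes backed by case studies.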
4. **Academy** (content from case studies to third-party services to check net data, leakages).
_Benefit_: this will be the backbone of our platform in the future, bridging the gap between the lack of privacy-centric education, the lack of third-party services to check up on solutions & poor transparency on behalf of privacy devs.
_Downside_: usually takes a long time to deliver & involves many different contributors (should be simplified via tracks like Web3 Privacy 101).
**Approach**:
1. Stick with privacy 101 (basics).
2. Break down the most crucial knowledge into simplified blocks.
3. Create a simplified & actionable "lecture framework" for guest lecturers.
4. Invite lecturers within a micro-learning format.
5. Cover basic 10 lectures via "Web3 privacy 101 introduction".
6. Deploy via videos + texts, then test them out.
## _Long to-do list (filtered-out product features)_
_Validity_
* documentation: exists/missing (+/-)
* 3rd party list (yes/no) from security audit agencies to independent security engineers (+their reputation 101 in later versions)
* website checklist (1-2-3-4-5..., yes/no validity)
* code audit (yes/no). Case studies (how those audits could look alike with active links)
_Education track 101_
* privacy tech vocabulary
* privacy data leakages 101
* transaction traceability 101 (the basics on an Etherscan example)
* web3 comms 101 (with a focus on misleading comms that create an obscure understanding of the tech/product)
* open source 101 (short version)
* e2ee 101
* social recovery 101
* missing 3rd party audit potential risks 101
* simplified media analysis 101
* trust assumptions 101
* privacy in ZK 101
* "attacks" (threats) 101
* encryption 101
* web3 docs 101 (basic description in relation to privacy)
* pubkey, network key, viewing key etc 101
* "Anti-features" 101
* did 101 (with a focus on the linkage between different data types)
* security audits 101 (basics)
* web3 privacy 101 (with a focus on misleading comms, like equating blockchain security with privacy)
* forkability x privacy 101
_Checklists_
* trusted sources checklist (&/or a list)
* web3 open-source checklist (soft screening like whitepaper (yes/no), docs (yes/no) etc)
* different self-check recommendations from security auditors & white-hat hackers as a supplement (how would you test product privacy)
* GitHub basic score (flag system could be a part of the second product release)
* a checklist for when you need a "tech" person's help to attest privacy features & when not (with a focus on the second scenario)
_Additional content_
* case studies when solutions actively embed privacy education within the workflow
* List of external third-party resources in a Wiki (for those, who like to make extra effort -> influence micro-services creation)
* case-studies (privacy matching): protocol = transparent, but use-cases = private (like Ethereum)
* future: privacy features comparison within solutions (creating a comprehensive privacy market metrics set)
* "net usage stats" services guide for the end user (but with a focus on empowering product managers to incorporate third-party tools to self-check their products & publicly report on their privacy features)
* "privacy stack" enhancement prototyping (1 solution + 1 solution = privacy "2x", like Session messenger + dVPN)
* examples of how products enhance privacy creatively (like a game mentioned by Obscuro) -> potential market benchmarks
* part of the wiki: web3 privacy products biases (in relation to privacy features x user experience)
_DYOR section_
* useful links (like dVPN studies in a DYOR section)
_Automation (beta)_
* "DERP"-alike tool for the future privacy script-based "check-up" product releases
* web3-native IP-checker for a beta version
* web3-native privacy features checker to be R&D'd (what could be automated & coded for QA automation)
_Future releases_
* in the future: collaborate with a security audit company to create a new docs-audit service description, where a third party attests the privacy features of the initial idea x tooling description