<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/"><channel><docs>https://blogs.law.harvard.edu/tech/rss</docs><title>Posts on ByteVagabond – Digital Tinkering &amp; Real-World Adventures</title><link>https://bytevagabond.com/post/</link><description>Recent content in Posts on ByteVagabond – Digital Tinkering &amp; Real-World Adventures</description><image><title>Posts on ByteVagabond – Digital Tinkering &amp; Real-World Adventures</title><link>https://bytevagabond.com/post/</link><url>favicon.ico</url></image><ttl>1440</ttl><generator>After Dark 10.1.0 (Hugo 0.126.1)</generator><language>en-US</language><lastBuildDate>Tue, 10 Feb 2026 18:47:12 UT</lastBuildDate><atom:link href="https://bytevagabond.com/post/index.xml" rel="self" type="application/rss+xml"/><item><title>I Analyzed 70 Startups' Codebases — The Ones With More Technical Debt Raised More Money</title><link>https://bytevagabond.com/post/technical-debt-startup-funding/</link><pubDate>Tue, 10 Feb 2026 11:00:00 UT</pubDate><guid>https://bytevagabond.com/post/technical-debt-startup-funding/</guid><description>This post distills the findings of my master's thesis at the University of Twente and TU Berlin. I built an automated pipeline that analyzed the codebases of 70 open-source, venture-backed companies across 146 funding periods during the ZIRP era (2009–2022). The full thesis, dataset, and source code are open: github.com/maxcodefaster/tdr-velocity-analysis. What follows is not anecdote — it's data. The Heresy Every software engineer has been taught the same gospel: technical debt is bad.</description><category domain="https://bytevagabond.com/categories/software-engineering">Software Engineering</category><category domain="https://bytevagabond.com/categories/startups">Startups</category><category domain="https://bytevagabond.com/categories/venture-capital">Venture Capital</category><category domain="https://bytevagabond.com/categories/technical-debt">Technical Debt</category><category domain="https://bytevagabond.com/categories/data-analysis">Data Analysis</category><content:encoded><![CDATA[ This post distills the findings of my master&#39;s thesis at the University of Twente and TU Berlin. I built an automated pipeline that analyzed the codebases of 70 open-source, venture-backed companies across 146 funding periods during the ZIRP era (2009–2022). The full thesis, dataset, and source code are open: github.com/maxcodefaster/tdr-velocity-analysis. What follows is not anecdote — it&#39;s data. The Heresy Every software engineer has been taught the same gospel: technical debt is bad. Pay it down. Refactor early. Don&amp;rsquo;t cut corners. The orthodoxy says shortcuts today become shackles tomorrow.
I believed this too. Then I spent a year measuring it.
I analyzed the actual codebases of 70 venture-backed startups — companies like GitLab, Sentry, Supabase, PostHog, dbt Labs, Grafana Labs, Elasticsearch, CockroachDB, n8n, Metabase, Temporal, and Strapi — across 146 funding periods. I measured their technical debt. I measured how fast they shipped. I checked whether they raised their next round.
The result that kept staring back at me: the startups with the most technical debt and the highest development velocity had the best funding outcomes. Period. A 60.6% success rate — higher than the &amp;ldquo;sustainable growth&amp;rdquo; companies doing everything by the book (57.5%). And the companies with the cleanest code? They had the lowest funding success rate in the entire dataset at 44.4%.
Software engineering&amp;rsquo;s most sacred cow might be wrong — or at least, not universally right.
What I Actually Measured Before the pitchforks come out, let me explain the methodology. I didn&amp;rsquo;t just eyeball repos and make vibes-based claims.
I built a fully automated analysis pipeline that does the following for each company:
1. Clones the primary product repository from GitHub
2. Checks out the code at the exact commit state before each funding round — so we&amp;rsquo;re measuring what investors actually saw, not what the code looks like today
3. Runs static analysis via the Qlty CLI platform (successor to CodeClimate) to calculate a Technical Debt Ratio (TDR) using COCOMO-based economic modeling
4. Calculates development velocity using a composite metric inspired by the CNCF&amp;rsquo;s project velocity framework: code churn per author &#43; commits per author, normalized across the dataset
5. Cross-references with Crunchbase and Tracxn to determine whether the company raised a subsequent funding round

The Technical Debt Ratio is defined as:
TDR = Remediation Cost (effort to fix all detected issues) ÷ Development Cost (COCOMO estimate of total dev effort)

Development Velocity is a normalized composite:

Velocity = 0.5 × (Code Churn per Author / Max across dataset) &#43; 0.5 × (Commits per Author / Max across dataset)

Both metrics are normalized to enable cross-company comparison. The sample spans six industry categories — from developer tools to databases to security — and only includes companies that raised Series A during the 2009–2022 window with at least 36 months elapsed for subsequent round evaluation. The full list of all 70 companies is in the repository.
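To make the two formulas concrete, here is an illustrative TypeScript sketch. The field names and inputs are stand-ins rather than the thesis pipeline's actual schema; the real implementation is in the linked repository.

// Illustrative sketch of the two metrics defined above.
// Field names are stand-ins, not the pipeline's actual schema.
interface PeriodMetrics {
  remediationCost: number;    // effort to fix all issues detected by static analysis
  developmentCost: number;    // COCOMO estimate of total development effort
  codeChurnPerAuthor: number;
  commitsPerAuthor: number;
}

// TDR = Remediation Cost / Development Cost
function technicalDebtRatio(m: PeriodMetrics): number {
  return m.remediationCost / m.developmentCost;
}

// Velocity = 0.5 * (churn per author / dataset max) + 0.5 * (commits per author / dataset max)
function developmentVelocity(m: PeriodMetrics, dataset: PeriodMetrics[]): number {
  const maxChurn = Math.max(...dataset.map((p) => p.codeChurnPerAuthor));
  const maxCommits = Math.max(...dataset.map((p) => p.commitsPerAuthor));
  return 0.5 * (m.codeChurnPerAuthor / maxChurn) + 0.5 * (m.commitsPerAuthor / maxCommits);
}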
The dataset has 146 development periods from 70 companies. The overall funding success rate is 54.1% — a near-perfect split for analysis. The full pipeline and data are open-source. Reproduce it. Challenge it. That&amp;rsquo;s the point.
The Automated Analysis Pipeline: For each of the 70 companies, the pipeline clones the repo, checks out the commit state at each funding date, runs Qlty static analysis, calculates velocity metrics, and assembles the final dataset of 146 periods.

Finding #1: Technical Debt Doesn&amp;rsquo;t Slow You Down

This is the finding that breaks the mental model.
At the company level, the correlation between technical debt and development velocity is r = 0.056, p = 0.667. That&amp;rsquo;s not weak — it&amp;rsquo;s essentially zero. Technical debt explained 5.2% of the variance in development velocity. The other 95%? Team capability, tooling, architecture, development infrastructure — everything except how clean your code is.
| Metric | Value |
| --- | --- |
| Pearson r | 0.056 |
| p-value | 0.667 |
| R² | 0.052 |
| N | 70 companies |

Within the range of technical debt levels observed in venture-backed startups (mean TDR = 0.031), debt simply does not impose the &amp;ldquo;interest payments&amp;rdquo; that Cunningham&amp;rsquo;s famous metaphor predicts. The teams with messy code shipped just as fast as the teams with pristine repos.
This doesn&amp;rsquo;t mean technical debt is irrelevant — at extreme levels or in mature codebases with millions of lines, the story is probably different. But for startups in their first few years? The data says debt isn&amp;rsquo;t the bottleneck you think it is.
The bottleneck is everything else: how fast you hire, how good your tooling is, how quickly you make decisions, how little ceremony you impose. Code quality is a rounding error in the velocity equation.
Finding #2: The &amp;ldquo;Move Fast and Break Things&amp;rdquo; Crowd Was Right

Here&amp;rsquo;s where it gets spicy. I classified every funding period into four strategic quadrants based on median splits of debt and velocity (inspired by Fowler&amp;rsquo;s Technical Debt Quadrant):
The Debt-Velocity Matrix: &amp;#39;Strategic Debt&amp;#39; (high debt &amp;#43; high velocity) achieves the highest funding success rate at 60.6%, beating &amp;#39;Sustainable Growth&amp;#39; (low debt &amp;#43; high velocity) at 57.5%.

| Quadrant | Description | N | Success Rate |
| --- | --- | --- | --- |
| Strategic Debt | High Debt &#43; High Velocity | 33 | 60.6% |
| Sustainable Growth | Low Debt &#43; High Velocity | 40 | 57.5% |
| Technical Burden | High Debt &#43; Low Velocity | 40 | 52.5% |
| Premature Optimization | Low Debt &#43; Low Velocity | 33 | 45.5% |

Read that table again. The companies doing everything &amp;ldquo;right&amp;rdquo; — clean code, high velocity — came in second. The companies breaking all the rules but shipping fast came in first.
And the companies with the cleanest code and slowest velocity? Dead last. Premature optimization isn&amp;rsquo;t just the root of all evil in code — it&amp;rsquo;s the root of all evil in startup strategy.
The velocity premium is consistent regardless of debt level. Among low-debt companies, high velocity adds &#43;12.0 percentage points to success rates. Among high-debt companies, it adds &#43;8.1 pp. Velocity dominates everywhere.
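For anyone who wants to sanity-check the quadrant assignment, here is a minimal sketch of the median-split classification. It is illustrative only; the actual classification code is in the repository.

// Illustrative median-split classification into the four quadrants.
type Quadrant =
  | 'Strategic Debt'           // high debt + high velocity
  | 'Sustainable Growth'       // low debt + high velocity
  | 'Technical Burden'         // high debt + low velocity
  | 'Premature Optimization';  // low debt + low velocity

function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

function classify(periods: { tdr: number; velocity: number }[]): Quadrant[] {
  const tdrMedian = median(periods.map((p) => p.tdr));
  const velocityMedian = median(periods.map((p) => p.velocity));
  return periods.map((p) => {
    const highDebt = p.tdr > tdrMedian;
    const highVelocity = p.velocity > velocityMedian;
    if (highDebt && highVelocity) return 'Strategic Debt';
    if (!highDebt && highVelocity) return 'Sustainable Growth';
    if (highDebt && !highVelocity) return 'Technical Burden';
    return 'Premature Optimization';
  });
}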
Finding #3: Velocity Has a Monotonic Relationship With Funding Success When I break development velocity into quartiles, the pattern is unmistakable:
| Velocity Quartile | Success Rate | Avg Velocity |
| --- | --- | --- |
| Q1 (Lowest) | 47.2% | 0.029 |
| Q2 | 50.0% | 0.063 |
| Q3 | 50.0% | 0.117 |
| Q4 (Highest) | 68.4% | 0.285 |

A clean, monotonic increase. More velocity → more funding. &#43;21.2 percentage points from bottom to top quartile.
Now look at what happens when you do the same with technical debt:
| TDR Quartile | Success Rate | Avg TDR |
| --- | --- | --- |
| Q1 (Lowest Debt) | 44.4% | 0.006 |
| Q2 | 61.1% | 0.021 |
| Q3 | 52.8% | 0.050 |
| Q4 (Highest Debt) | 57.9% | 0.165 |

No clear pattern. And the lowest debt quartile has the worst success rate. At least in this dataset: velocity is the signal, debt is noise.
This Isn&amp;rsquo;t a Fluke — It&amp;rsquo;s Robust Across Frameworks One valid criticism of median-split analysis is that it&amp;rsquo;s sensitive to where you draw the line. So I tested the findings across three different classification schemes:
Median Split (2×2): Strategic Debt wins at 60.6%.
Tertile Framework (3×3): Low Debt &#43; High Velocity hits 73.3%. High Debt &#43; High Velocity hits 68.4%. High Debt &#43; Medium Velocity hits 72.7%. The velocity effect holds everywhere.
Quartile Extremes (bottom 25% vs. top 25%): The gap is staggering — Very High Debt &#43; Very High Velocity achieves 66.7% success, while Very Low Debt &#43; Very Low Velocity drops to 30.0%. That&amp;rsquo;s a 36.7 percentage point spread.
Cross-Framework Validation: The velocity advantage holds across all analytical approaches. This isn&amp;#39;t a statistical artifact — it&amp;#39;s a pattern. Every framework tells the same story. The velocity advantage is real, consistent, and robust.
Why? The Temporal Arbitrage Theory

Why would taking on debt and shipping fast beat doing things &amp;ldquo;right&amp;rdquo;? I propose a framework I call temporal arbitrage — systematically trading future development efficiency for immediate execution capability when current capacity is more valuable than equivalent future capacity.
Three mechanisms make this work in venture-backed environments:
1. Information Asymmetry
VCs don&amp;rsquo;t audit codebases. They can&amp;rsquo;t tell if your architecture is clean or a disaster. What they can observe is momentum: feature launches, user growth, commit velocity, product iteration speed. In a world of imperfect information, velocity is the strongest signal investors can actually see. Development velocity becomes a composite proxy for team capability, coordination quality, and market responsiveness — none of which are visible by reading source code.
2. The ZIRP Subsidy
Cunningham&amp;rsquo;s debt metaphor assumes you pay your own interest. But during the Zero Interest-Rate Policy era (2009–2022), venture capital effectively subsidized the interest payments on technical debt. When your next funding round covers the cost of refactoring, and the penalty for being slow to market is death, the rational play flips: accumulate debt, ship faster, raise the next round, then clean up. The metaphor breaks when someone else pays your interest.
3. Dynamic Capabilities as Compound Advantage
High-velocity teams don&amp;rsquo;t just ship faster — they learn faster, attract better talent, and generate higher investor confidence. Each advantage compounds into the next. A team that can iterate quickly builds better products through faster feedback loops, which drives user growth, which improves fundraising outcomes, which funds more development. Velocity is a flywheel, not a line item.
The Uncomfortable Implication Let me be direct about what this data suggests for practitioners:
For founders and CTOs: You may be over-investing in code quality at the expense of market speed. Within typical startup debt ranges, the data suggests debt is not your bottleneck — velocity is. It&amp;rsquo;s worth asking whether your engineering investments are going into velocity enablers (deployment automation, rapid experimentation infrastructure, team capability) or into code polish that the market doesn&amp;rsquo;t reward.
For engineering leaders: The R² of 0.052 means that code quality explains almost nothing about how fast your team ships. If you want to move faster, look at your tooling, your decision-making processes, your hiring — not your linting rules.
For investors doing technical due diligence: In this dataset, development velocity was a stronger predictor of funding success than code quality. The monotonic relationship between velocity and success (47.2% → 68.4%) suggests it may deserve more weight in technical evaluations — at least during capital-abundant periods.
What This Is NOT Before this gets misread on Hacker News — let me be clear about what I&amp;rsquo;m not saying:
This is not a license to write garbage code. The &amp;ldquo;Strategic Debt&amp;rdquo; quadrant works because these teams are fast and accumulate debt as a side effect of speed. The &amp;ldquo;Technical Burden&amp;rdquo; quadrant (high debt, low velocity) has mediocre outcomes. Debt without velocity is the worst of both worlds.
This is not universal. These findings are specific to the ZIRP era (2009–2022), open-source VC-backed companies, and code-level debt measurable via static analysis. The post-ZIRP environment (higher rates, capital scarcity, profitability focus) may tell a completely different story — and that&amp;rsquo;s exactly the natural experiment we should be studying next.
This is not causal. The study is observational. It&amp;rsquo;s possible that high-capability teams simply tolerate more mess because they can, rather than debt enabling velocity. Reverse causality is a real concern. I also can&amp;rsquo;t control for founder experience, network effects, market timing, or competitive intensity.
Architectural debt is invisible here. Static analysis captures code-level issues — style violations, complexity, duplication. It doesn&amp;rsquo;t capture the architectural decisions that truly make or break scalability. A company with &amp;ldquo;clean&amp;rdquo; code-level metrics might harbor deep architectural problems, and vice versa.
The Bigger Picture The thesis title is &amp;ldquo;Technical Debt as a Strategic Trade-Off&amp;rdquo; and that framing is deliberate. The software engineering community has treated technical debt as an unconditional liability for three decades. This data suggests it&amp;rsquo;s actually a context-dependent strategic variable.
In capital-abundant environments with compressed competitive windows, the optimal strategy may be to accept debt, maximize velocity, and let external funding subsidize the costs. In capital-scarce environments with longer runways, the calculus probably reverses. The engineering principles don&amp;rsquo;t change — but the optimal application of those principles depends on the economic and competitive context you&amp;rsquo;re operating in.
Traditional software engineering wisdom was built for a world of stable, revenue-generating organizations maintaining long-lived systems. Venture-backed startups in a zero-rate environment may be playing a different enough game that the same rules don&amp;rsquo;t apply in the same way.
Of course, this study has real boundaries. It&amp;rsquo;s 70 companies — meaningful, but not massive. It&amp;rsquo;s open-source companies only, which may behave differently than proprietary startups due to community contributions and transparency dynamics. It captures code-level debt but not the architectural debt that arguably matters more. And most importantly, it&amp;rsquo;s specific to a macroeconomic era that has already ended. The post-ZIRP world — higher rates, capital scarcity, profitability mandates — could easily invert these findings. That&amp;rsquo;s the study I&amp;rsquo;d love to see someone run next.
Still, within its scope, the pattern is consistent enough to be worth taking seriously: during the ZIRP era, the entrepreneurs who thrived weren&amp;rsquo;t necessarily those who built it perfectly — they were those who built it first.
Reproduce This The complete analysis pipeline, dataset, and methodology are open-source:
Thesis: Available in full with all methodology details, statistical procedures, and appendices
Source Code: github.com/maxcodefaster/tdr-velocity-analysis
Dataset: 70 companies, 146 funding periods, all metrics included

The pipeline clones repos, checks out historical states, runs static analysis, calculates TDR and velocity, and cross-references with funding outcomes. I&amp;rsquo;d genuinely welcome people forking the repo, running it on different samples, or testing the post-ZIRP cohort. The whole point of open-sourcing this was to make it challengeable.
This post is based on my master&amp;rsquo;s thesis &amp;ldquo;Technical Debt as a Strategic Trade-Off: An Empirical Analysis of Execution Speed and Funding Success in Venture-Backed Startups&amp;rdquo; completed at the University of Twente and TU Berlin. Thanks for reading — I&amp;rsquo;d love to hear your thoughts, especially from anyone sitting in the &amp;ldquo;Premature Optimization&amp;rdquo; quadrant right now.
]]></content:encoded></item><item><title>Building Enterprise AI: Hard-Won Lessons from 1200+ Hours of RAG Development</title><link>https://bytevagabond.com/post/how-to-build-enterprise-ai-rag/</link><pubDate>Mon, 28 Jul 2025 10:50:00 UT</pubDate><guid>https://bytevagabond.com/post/how-to-build-enterprise-ai-rag/</guid><description>The following blog post is an opinionated result of hundreds of hours of intense research, implementation and evaluation while developing an enterprise AI chat system (source code can be found here). While I do not remember every source and arxiv paper, most will be linked directly in the text. Now please fasten your seatbelts and enjoy your deep dive into modern AI RAG architecture. AI Apps Are Really Just RAG AI is being crammed into every application.</description><category domain="https://bytevagabond.com/categories/artificial-intelligence">Artificial Intelligence</category><category domain="https://bytevagabond.com/categories/machine-learning">Machine Learning</category><category domain="https://bytevagabond.com/categories/enterprise-software">Enterprise Software</category><category domain="https://bytevagabond.com/categories/data-engineering">Data Engineering</category><category domain="https://bytevagabond.com/categories/software-architecture">Software Architecture</category><content:encoded><![CDATA[ The following blog post is an opinionated result of hundreds of hours of intense research, implementation and evaluation while developing an enterprise AI chat system (source code can be found here). While I do not remember every source and arxiv paper, most will be linked directly in the text. Now please fasten your seatbelts and enjoy your deep dive into modern AI RAG architecture. AI Apps Are Really Just RAG AI is being crammed into every application. This guide will help developers understand what you actually need to build an AI app. Without debating whether the current state of cramming AI features into every existing app makes sense, we first need to understand what is meant by a text-based AI app.
In almost every commercial AI app, there&amp;rsquo;s no training of custom AI models. Instead, they use base models from big providers like OpenAI, Google, Anthropic, xAI, or open-source alternatives like Llama or Mistral. This is because training a model is highly resource-intensive, and state-of-the-art models have become commoditized - the gaps between models regarding intelligence, capabilities, and price are closing rapidly.
So if developing AI apps doesn&amp;rsquo;t typically involve building your own models, what do you need to do as a developer? The keyword is RAG (Retrieval Augmented Generation). Essentially, it means feeding base models the correct data and tuning requests to get desired answers. RAG isn&amp;rsquo;t new - it&amp;rsquo;s been around in data science for quite some time - but it has become a buzzword in the recent AI craze.
At its core, RAG involves two steps: Ingestion and Retrieval. This might seem trivial at first, but as with anything in data science, it&amp;rsquo;s always garbage in, garbage out. The AI space is evolving drastically - I can&amp;rsquo;t remember the last time I&amp;rsquo;ve seen any space evolve this fast. I&amp;rsquo;ve done the heavy lifting, scanning through dozens of papers and repositories while developing my own clone of enterprise applications like GleanAI, Zive, ChatGPT Enterprise, and Google Agentspace.
While there&amp;rsquo;s a lot of hocus pocus from snake oil vendors claiming their one magical solution will deliver enterprise RAG nirvana, the reality is far more nuanced. From my experience building production systems, it&amp;rsquo;s not any single technique that makes RAG work - it&amp;rsquo;s the thoughtful combination of proven methods that delivers results. The recommendations below sit on the Pareto frontier, balancing integration difficulty, performance, and cost. Ignore the shiny new papers promising 50% improvements on cherry-picked benchmarks; focus on techniques that actually work in messy enterprise environments with real users asking terrible questions.
Enterprise RAG Architecture: A comprehensive view of production-ready RAG systems, from heterogeneous data sources through sophisticated ingestion pipelines to hierarchical retrieval and intelligent response generation. Note the two-stage approach that reduces search scope while maintaining quality. Here&amp;rsquo;s what you need to know to become an AI developer:
Ingestion Streamlining Your Data You want your AI apps to know about your data, but that data comes in different forms and shapes. Let&amp;rsquo;s say you have Microsoft SharePoint, Notion, or Confluence spaces. Text generation AI works best when fed cleaned-up text content, so what do you do with all the PDFs, Office docs, or custom spec files?
You need a pipeline that converts heterogeneous data into a homogeneous format so your retrieval algorithm always finds the most relevant (top K) results. Base models work exceptionally well with markdown content - a format that&amp;rsquo;s plain but has enough rich text elements and strong hierarchical features. There are different markdown flavors, but the most popular in terms of features is GFM (GitHub Flavored Markdown).
In my enterprise app, I have several converters to convert most file types (PDFs, Office docs, images) and custom enterprise apps (Notion, Confluence) in a plug-and-play way to markdown. The code includes integrations for SharePoint, Notion, Confluence, Jira &amp;amp; Google Drive via OAuth, PDF to GFM Markdown via Gemini 2.5 Flash, and Office Files (PowerPoint, Word, etc.) to GFM Markdown via Gotenberg.
Chunking Once you have your cleaned data in markdown format, you need to tackle the next challenge: LLMs have limited context window sizes, meaning you typically can&amp;rsquo;t cram all your data into the API call. While context windows have grown enormous, it still doesn&amp;rsquo;t make sense to push everything in there for two main reasons:
Cost factor - Pushing everything into the context is expensive, and even with token caching, frequent data changes invalidate the cache and make it non-viable
Retrieval performance - Performance degrades with bigger contexts, leading to more hallucinations

Therefore, we need to &amp;ldquo;chunk&amp;rdquo; our data and optimize it for retrieval to feed the LLM with relevant information. Chunking is a meta-science in itself, with different approaches varying in complexity, cost, and performance implications:
Fixed Size Chunking - Splits text into chunks of a specified character count, regardless of structure
Recursive Chunking - Divides text hierarchically using separators, recursively splitting until reaching desired sizes
Document-Based Chunking - Splits content based on document structure (markdown headers, code functions, table rows)
Semantic Chunking - Uses embeddings to group semantically related content together
Agentic Chunking - Employs LLMs to intelligently determine chunk boundaries

Moving from top to bottom, we see increasing sophistication and performance, while Document-Based Chunking sits at the Pareto frontier. In my testing, this approach hits the sweet spot, especially since our data is already in GFM format.
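To make document-based chunking concrete, here is a minimal sketch that splits GFM on headings and tracks the heading path. It is illustrative only; the production chunker linked further down also enforces chunk-size limits, truncates long context paths, and handles incremental updates.

// Minimal sketch: split GFM markdown on headings and keep the heading path per chunk.
interface Chunk {
  headingPath: string[];  // e.g. ['Berlin', 'History', 'Prehistory of Berlin']
  content: string;
}

function chunkByHeadings(markdown: string): Chunk[] {
  const chunks: Chunk[] = [];
  const path: string[] = [];
  let buffer: string[] = [];

  const flush = () => {
    const content = buffer.join('\n').trim();
    if (content) chunks.push({ headingPath: [...path], content });
    buffer = [];
  };

  for (const line of markdown.split('\n')) {
    const heading = line.match(/^(#{1,6})\s+(.*)$/);
    if (heading) {
      flush();                        // close the previous section
      const level = heading[1].length;
      path.splice(level - 1);         // drop headings at this level and deeper
      path[level - 1] = heading[2];   // record the new heading at its level
    } else {
      buffer.push(line);
    }
  }
  flush();
  return chunks;
}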
The Context Problem
There&amp;rsquo;s one significant issue: keeping context. Imagine this scenario: We have text about Berlin and chunk it into different paragraphs, but lose context to the overall topic. When I say &amp;ldquo;it&amp;rsquo;s more than 3.85 million&amp;hellip;&amp;rdquo; what is &amp;ldquo;it&amp;rdquo; referring to? We lose context in this chunk.
Context Loss in Traditional Chunking: Individual chunks lose connection to the document&amp;#39;s main subject, making &amp;#39;it&amp;#39;s more than 3.85 million&amp;#39; ambiguous without knowing the document discusses Berlin&amp;#39;s population [https://jina.ai/news/late-chunking-in-long-context-embedding-models/] The Solution: Context Path Breadcrumbs
Here&amp;rsquo;s where we can apply a clever &amp;ldquo;hack&amp;rdquo; utilizing our markdown conversion. We create the abstract syntax tree (AST) from the markdown document hierarchy and prepend it as breadcrumbs to every chunk.
// Example: If our paragraph is in &amp;#34;Prehistory of Berlin&amp;#34; (H3)
// under &amp;#34;History&amp;#34; (H2) in document &amp;#34;Berlin&amp;#34; (H1)
const contextPath = &amp;#34;Berlin &amp;gt; History &amp;gt; Prehistory of Berlin&amp;#34;;
const chunkWithContext = `${contextPath}\n\n${chunkContent}`;

This isn&amp;rsquo;t completely solved yet, as our document could live in a folder structure. To overcome this, we apply the same solution at the folder/file level: build a hierarchical path based on folder structure and prepend it to the markdown AST.
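Extending this to the storage hierarchy is just more of the same prepending. A tiny sketch follows; the folder values and function names are illustrative, not the actual implementation.

// Sketch: combine the storage hierarchy with the markdown heading hierarchy.
function buildContextPath(folderPath: string[], headingPath: string[]): string {
  return [...folderPath, ...headingPath].join(' > ');
}

const chunkContent = 'Its more than 3.85 million inhabitants make it ...';
const contextPath = buildContextPath(
  ['Company Wiki', 'Cities'],                      // folder/file hierarchy from the source system
  ['Berlin', 'History', 'Prehistory of Berlin'],   // markdown AST hierarchy
);
const chunkWithContext = `${contextPath}\n\n${chunkContent}`;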
The complete GFM context path chunker with all its logic, including finding optimal chunk size and truncating context paths when they get too long, can be found here. This includes incremental updates and hierarchy management.
Embeddings So far, we&amp;rsquo;ve cleaned and split our content while preserving context. We could use a simple text search algorithm to find relevant chunks. If I search for &amp;ldquo;inhabitants Berlin,&amp;rdquo; we&amp;rsquo;d probably get relevant results. But what about searching &amp;ldquo;inhabitants capital Germany&amp;rdquo;? With basic text search, we&amp;rsquo;d get no results.
That&amp;rsquo;s where embeddings come into play - one of the key concepts of AI RAG data handling.
What are embeddings?
Embeddings are numerical representations of text that capture semantic meaning in high-dimensional vector space. Instead of matching exact words, embeddings allow us to find conceptually similar content - so &amp;ldquo;inhabitants capital Germany&amp;rdquo; would match chunks about &amp;ldquo;Berlin population&amp;rdquo; because the AI understands these concepts are related.
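To make &amp;ldquo;conceptually similar&amp;rdquo; concrete: similarity between embeddings is usually measured with cosine similarity. The sketch below uses tiny made-up vectors purely for illustration; real embeddings come from a model and have hundreds or thousands of dimensions.

// Cosine similarity is what "conceptually similar" boils down to in practice.
// The vectors below are made-up stand-ins for real embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const normA = Math.sqrt(a.reduce((s, x) => s + x * x, 0));
  const normB = Math.sqrt(b.reduce((s, x) => s + x * x, 0));
  return dot / (normA * normB);
}

// "inhabitants capital Germany" and "Berlin population" should land close together
// in embedding space, while an unrelated sentence lands further away.
const queryVec = [0.12, 0.84, 0.03];      // embed('inhabitants capital Germany')
const berlinVec = [0.10, 0.80, 0.05];     // embed('Berlin population ...')
const unrelatedVec = [0.90, 0.02, 0.40];  // embed('quarterly revenue report')

console.log(cosineSimilarity(queryVec, berlinVec));    // high
console.log(cosineSimilarity(queryVec, unrelatedVec)); // noticeably lower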
Choosing the Right Embedding Model
The quality of your embeddings depends heavily on the model you choose. The MTEB (Massive Text Embedding Benchmark) leaderboard is your go-to resource for comparing embedding models across different tasks. It evaluates models across 8 different task types including retrieval, classification, clustering, and semantic textual similarity.
The Chunk Size Dilemma
Chunk size significantly impacts retrieval quality - include too much and the vector loses specificity, include too little and you lose context. There&amp;rsquo;s no &amp;ldquo;one size fits all&amp;rdquo; solution. Most developers start with chunk sizes between 100 and 1,000 tokens; a common rule of thumb is to keep chunks at or below roughly 512 tokens for optimal retrieval performance.
The Size Bias Problem
Longer texts generally show higher similarity scores when compared to other embeddings, regardless of actual semantic relevance. This means you can&amp;rsquo;t use cosine similarity thresholds to determine if matches are actually relevant.
Late Chunking: The Game Changer
This is where Late Chunking comes in as a breakthrough approach. Instead of chunking first then embedding, late chunking flips this process:
// Traditional approach
const chunks = chunkDocument(fullDocument);
const embeddings = chunks.map(chunk =&amp;gt; embed(chunk)); // Individual embeddings

// Late chunking approach
const chunks = chunkDocument(fullDocument);
const contextualEmbeddings = embed(chunks); // Embed all chunks together as array
// This creates embeddings that consider inter-chunk relationships and context

This preserves broader document context within each chunk since the embeddings were created considering the full text, not just isolated segments. When you split text like &amp;ldquo;Its more than 3.85 million inhabitants make it the European Union&amp;rsquo;s most populous city&amp;rdquo; from a Berlin article, traditional chunking loses the connection to &amp;ldquo;Berlin&amp;rdquo; mentioned earlier, but late chunking preserves this context.
For an implementation of late chunking, take a look here.
Hybrid and Hierarchical Indexed Database Schema We&amp;rsquo;ve prepared our data to be stored in the database and later retrieved. A database for AI RAG needs a vector field type for storing embeddings. There are many paid vector databases, but usually the database you already have is the best database. If you&amp;rsquo;re working with PostgreSQL, you can use an extension called pgvector.
Dedicated Vector Databases:
Pinecone - Managed vector database with excellent performance and scaling
Weaviate - Open-source vector database with GraphQL APIs
Qdrant - Rust-based vector search engine with filtering capabilities
Chroma - Developer-friendly open-source embedding database
Milvus - Cloud-native vector database built for scalable similarity search

Traditional Databases with Vector Support:
Supabase - Hosted PostgreSQL with built-in pgvector support
Redis - In-memory database with vector search capabilities
Elasticsearch - Search engine with dense vector field support
MongoDB Atlas - Document database with vector search functionality

While dedicated vector databases offer optimized performance, I&amp;rsquo;ve found that extending your existing database with vector capabilities often provides the best balance of simplicity, cost, and performance for most enterprise applications. You avoid data synchronization issues, leverage existing backup and security infrastructure, and can perform complex queries that combine vector similarity with traditional filters.
Enterprises can have vast amounts of documents in different data silos. This can be challenging for a RAG system to always retrieve the most relevant chunks. Hierarchical Indexing is the approach of structuring your database schema to have multiple levels for your documents.
-- Parent level: document metadata and summary
CREATE TABLE documents (
  id bigserial PRIMARY KEY,
  title text NOT NULL,
  summary text NOT NULL,
  summary_embedding vector(1024) NOT NULL,
  -- ... other fields
);

-- Child level: document chunks with foreign key
CREATE TABLE document_chunks (
  id bigserial PRIMARY KEY,
  content text NOT NULL,
  embedding vector(1024) NOT NULL,
  document_id bigint REFERENCES documents(id) ON DELETE CASCADE,
  -- ... other fields
);

This lets us first retrieve all potentially relevant documents, then search only over the chunks of those documents. The summary can be generated via a low-cost model such as Gemini Flash Lite. A prompt for generating such a summary can be found here.
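As an illustration of that summary step, here is a hedged sketch in the same AI SDK style as the orchestration code shown later in this post. The model id and prompt are placeholders, not the system's actual prompt (which is linked above).

import { google } from '@ai-sdk/google';
import { generateText } from 'ai';

// Sketch: generate the parent-level summary with a low-cost model.
// Model id and prompt are illustrative placeholders.
async function summarizeDocument(markdown: string): Promise<string> {
  const { text } = await generateText({
    model: google('gemini-2.0-flash-lite'),
    prompt: `Summarize the following document in a few sentences, preserving key entities and topics:\n\n${markdown}`,
  });
  return text;
}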
While embeddings are great, multiple papers have shown that combining them with classic text corpus search such as BM25, or even simpler n-gram search, yields significantly better results. That&amp;rsquo;s why, on top of our hierarchical indexed database structure, we also create a TokenNgram full-text index.
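For illustration, a PGroonga-style index definition along these lines is sketched below. The exact tokenizer options and index names in the linked implementation may differ.

-- Sketch: full-text indexes alongside the vector columns (PGroonga syntax;
-- tokenizer options and index names are illustrative, not the exact production DDL).
CREATE EXTENSION IF NOT EXISTS pgroonga;

CREATE INDEX idx_documents_content_ngram
  ON documents
  USING pgroonga (content)
  WITH (tokenizer = 'TokenNgram');

CREATE INDEX idx_chunks_content_ngram
  ON document_chunks
  USING pgroonga (content)
  WITH (tokenizer = 'TokenNgram');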
Retrieval We&amp;rsquo;ve finally reached the last puzzle piece: the retrieval step. We have cleaned and saved our data, and now it&amp;rsquo;s time to create a sophisticated search algorithm. This is where the magic happens - turning a user&amp;rsquo;s messy query into precise, relevant results.
HyDE - Making Queries Smarter The first technique that&amp;rsquo;s absolutely game-changing is HyDE (Hypothetical Document Embeddings). Instead of just embedding the user&amp;rsquo;s query directly, we generate a hypothetical answer to what the user is asking, then use that answer&amp;rsquo;s embedding for retrieval.
Why? Because user queries are often short and ambiguous (&amp;ldquo;quarterly results&amp;rdquo;), while the documents they&amp;rsquo;re looking for contain full, detailed content.
// Traditional approach
const queryEmbedding = await embed(&amp;#34;quarterly results&amp;#34;);

// HyDE approach
const hydeAnswer = await llm.generate(&amp;#34;What would quarterly results contain?&amp;#34;);
// -&amp;gt; &amp;#34;The quarterly results include revenue of $2.3M, profit margins of 15%...&amp;#34;
const hydeEmbedding = await embed(hydeAnswer);

HyDE was introduced by researchers at CMU and consistently outperforms standard query embedding across most domains. In my implementation, I generate the HyDE response using a lightweight model like Gemini Flash, then embed both the original query and the hypothetical answer.
Hierarchical Document Retrieval Here&amp;rsquo;s where our database design pays off. Instead of throwing embeddings at the wall and hoping for the best, we use a two-stage hierarchical hybrid search:
// Stage 1: Document-level candidate filtering (hybrid search) const documentCandidates = await client.rpc(&amp;#39;match_documents_hierarchical&amp;#39;, { query_embedding: queryEmbedding, // For document summary similarity hyde_embedding: hydeEmbedding, // For chunk-level scoring query_text: queryText, // For full-text search // ... other params }); // The SQL function performs sophisticated two-stage filtering: // 1. Document filtering with OR condition: // - Vector similarity: (1 - (summary_embedding &amp;lt;=&amp;gt; query_embedding)) &amp;gt; threshold // - Full-text search: document.content &amp;amp;@~ query_text // 2. Chunk scoring from candidate documents with hybrid scoring: // - Semantic: (1 - (chunk.embedding &amp;lt;=&amp;gt; hyde_embedding)) // - Keyword: pgroonga_score when chunk.content &amp;amp;@~ query_text The actual implementation is far more nuanced:
Stage 1 - Document Candidate Selection:
Uses query embedding against document summary embeddings for semantic similarity
Simultaneously runs full-text search against the entire document content using PGroonga
Documents qualify if they match EITHER condition (OR logic)
This dramatically reduces the search space while maintaining high recall

Stage 2 - Chunk-Level Hybrid Scoring:
Uses HyDE embedding against individual chunk embeddings for precise semantic matching Runs full-text search against chunk content for exact keyword matches Combines both scores with proper normalization and weighting Only processes chunks from documents that passed Stage 1 filtering -- Simplified view of the actual SQL implementation WITH document_candidates AS ( SELECT d.id AS doc_id FROM documents d JOIN document_user_access ua ON d.id = ua.document_id WHERE ua.user_id = target_user_id AND ( (1 - (d.summary_embedding &amp;lt;=&amp;gt; query_embedding)) &amp;gt; similarity_threshold OR d.content &amp;amp;@~ query_text -- Full-text search on document content ) LIMIT doc_search_limit ), chunk_scores AS ( SELECT c.*, d.*, -- Semantic score using HyDE embedding (1 - (c.embedding &amp;lt;=&amp;gt; hyde_embedding)) AS semantic_score, -- Keyword score using PGroonga CASE WHEN c.content &amp;amp;@~ query_text THEN pgroonga_score(c.tableoid, c.ctid) ELSE 0 END AS keyword_score FROM document_chunks c JOIN documents d ON c.document_id = d.id WHERE c.document_id IN (SELECT doc_id FROM document_candidates) ) -- Score normalization and weighting happens here... This approach is inspired by hierarchical passage retrieval methods but optimized for real-world enterprise scenarios. The key insight is that most queries are looking for information from a small subset of documents, so we can massively reduce the search space without sacrificing quality. The hybrid approach at both levels ensures we capture both semantic similarity and exact keyword matches, which is crucial for enterprise search where users might search for specific terms, project names, or concepts.
Query Expansion and Self-Reflective RAG Raw user queries are often inadequate. &amp;ldquo;Meeting notes from last week&amp;rdquo; could mean anything. That&amp;rsquo;s where query expansion comes in. Before hitting the database, I use a lightweight LLM to:
Expand the query with related terms and synonyms
Extract time filters (&amp;ldquo;last week&amp;rdquo; → specific date range)
Identify the user&amp;rsquo;s intent and adjust search weights accordingly
Generate better search keywords

But here&amp;rsquo;s where it gets interesting - after the initial search, we don&amp;rsquo;t just stop. We implement a self-reflective RAG approach where the system evaluates its own search results:
// The complete orchestrated search pipeline async function createOrchestratedStream(query: string, userId: string) { // 1. Query Expansion with HyDE const expansion = await generateObject({ model: tracedExpansionModel, schema: queryExpansionSchema, messages: [{ role: &amp;#39;user&amp;#39;, content: `Expand query: ${query}` }] }); // 2. Initial hierarchical search const initialResults = await hierarchicalRetriever.hierarchicalSearch( expansion.expandedQuery, expansion.hydeAnswer, // Key: using HyDE for better retrieval { target_user_id: userId, embedding_weight: 0.7, fulltext_weight: 0.3, start_date: expansion.timeFilter?.startDate, end_date: expansion.timeFilter?.endDate } ); // 3. Self-reflective gap analysis const evaluation = await generateObject({ model: tracedGapAnalysisModel, schema: gapAnalysisSchema, messages: [{ role: &amp;#39;user&amp;#39;, content: gapAnalysisUserPrompt.format({ query: expansion.expandedQuery, initialSearchResults: JSON.stringify(initialResults) }) }] }); // 4. Follow-up searches if gaps identified let allResults = [...initialResults]; if (evaluation.needsAdditionalSearches) { const followupPromises = evaluation.informationGaps .filter(gap =&amp;gt; gap.importance &amp;gt;= 7) // Only high-importance gaps .slice(0, 2) // Max 2 follow-ups .map(gap =&amp;gt; hierarchicalRetriever.hierarchicalSearch( gap.followupQuery, gap.followUpHydeAnswer, searchParams ) ); const followupResults = await Promise.all(followupPromises); allResults = HierarchicalRetriever.combineResults([ initialResults, ...followupResults ]); } return allResults; } This is similar to the ReAct pattern but applied specifically to information retrieval. The magic number I&amp;rsquo;ve found is limiting this to 2 follow-up searches maximum - beyond that, you get diminishing returns and increased latency.
Hybrid Search That Actually Works Everyone talks about &amp;ldquo;hybrid search,&amp;rdquo; but most implementations are inadequate. Here&amp;rsquo;s what actually works:
Adaptive weighting based on query intent: By default, we bias toward semantic search (70% embeddings, 30% keyword) to capture documents with relevant meaning. However, during query expansion, we analyze the query to detect specific search patterns and dynamically adjust weights.
// Query expansion determines optimal search strategy
const expansion = await expandQuery(query);
const weights = expansion.weights || {
  semanticWeight: expansion.queryType === &amp;#39;specific_terms&amp;#39; ? 0.3 : 0.7,
  keywordWeight: expansion.queryType === &amp;#39;specific_terms&amp;#39; ? 0.7 : 0.3
};
const combinedScore = (
  weights.semanticWeight * normalizedSemanticScore &#43;
  weights.keywordWeight * normalizedKeywordScore
);

Query type detection:
Conceptual queries (&amp;ldquo;project status&amp;rdquo;, &amp;ldquo;team performance&amp;rdquo;) → favor semantic search
Specific terms (&amp;ldquo;Project Phoenix&amp;rdquo;, &amp;ldquo;JIRA-1234&amp;rdquo;) → favor keyword search
Mixed queries → balanced weighting

Proper score normalization: You can&amp;rsquo;t just add cosine similarity and BM25 / PGroonga scores—they have completely different distributions. I normalize both to 0-1 ranges before combining (a minimal sketch of this step follows at the end of this section).
Two-phase scoring: Embedding similarity for document-level filtering, then detailed hybrid scoring only for chunks from candidate documents. This keeps it fast while maintaining quality.
This adaptive approach ensures we get the conceptual relevance that embeddings excel at, while not missing exact matches for specific terminology that enterprises rely on.
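To make the normalization step concrete, here is a minimal TypeScript sketch of min-max normalization and weighted combination. It mirrors the logic described above; the actual implementation lives in the SQL function and retriever linked later, and the names here are illustrative.

// Sketch: cosine similarities and PGroonga/BM25-style scores live on different scales,
// so both are mapped to 0-1 within the current result set before the weighted combination.
function minMaxNormalize(scores: number[]): number[] {
  const min = Math.min(...scores);
  const max = Math.max(...scores);
  if (max === min) return scores.map(() => 0);
  return scores.map((s) => (s - min) / (max - min));
}

function combineScores(
  semanticScores: number[],
  keywordScores: number[],
  weights = { semanticWeight: 0.7, keywordWeight: 0.3 },
): number[] {
  const sem = minMaxNormalize(semanticScores);
  const key = minMaxNormalize(keywordScores);
  return sem.map((s, i) => weights.semanticWeight * s + weights.keywordWeight * key[i]);
}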
Advanced Filtering and Metadata Magic Enterprise data isn&amp;rsquo;t just text - it has structure, permissions, timestamps, and context. My retrieval system handles:
Temporal filtering: Queries like &amp;ldquo;recent sales reports&amp;rdquo; automatically extract time ranges and filter documents by source_updated_at
Permission-aware search: The document_user_access table ensures users only see results they&amp;rsquo;re authorized to access, all handled at the database level for performance
Smart metadata filtering: Instead of rigid JSON matching, I implemented flexible metadata filters that handle type mismatches gracefully
Source-aware retrieval: Different document sources can have different retrieval strategies and weights applied automatically

Reranking - The Diminishing Returns

After all this sophisticated retrieval, we still have one more potential trick: reranking. In earlier versions of my system, I used Jina&amp;rsquo;s reranking models to take top candidates and reorder them based on deeper semantic understanding.
Here&amp;rsquo;s the thing though - I&amp;rsquo;ve disabled reranking in the current version. While academic papers show impressive improvements, in practice with a well-tuned hierarchical search, the quality gains are marginal while the latency cost is significant. The hierarchical approach with proper hybrid scoring and query expansion gets you 90% of the way there, and reranking that extra 5-10% isn&amp;rsquo;t worth doubling your response time.
The key insight is that rerankers work best when initial retrieval is poor. But when your retrieval is already sophisticated, you hit diminishing returns fast. That&amp;rsquo;s the Pareto frontier in action - focus on getting the fundamentals right rather than adding expensive bells and whistles.
Performance Optimizations That Matter All of this sounds expensive, but with proper optimizations, it runs fast:
HNSW vector indexes for ~95% faster similarity searches compared to IVF indexes
Materialized CTEs in PostgreSQL functions to avoid recomputing candidate sets
Parallel processing of follow-up searches during self-reflective retrieval
Smart caching of embeddings and query expansions

The entire pipeline - from raw query to final ranked results - typically runs in under 2 seconds for enterprise datasets with millions of documents. Check out the full implementation of the hierarchical retriever here and the query orchestration logic here.
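For reference, the HNSW indexes mentioned above are plain pgvector DDL (pgvector 0.5&#43;). The parameters below are the library defaults, shown for illustration rather than as tuned values from this system.

-- Sketch: pgvector HNSW indexes for the embedding columns.
-- m and ef_construction are pgvector's defaults, not tuned numbers.
CREATE INDEX idx_chunks_embedding_hnsw
  ON document_chunks
  USING hnsw (embedding vector_cosine_ops)
  WITH (m = 16, ef_construction = 64);

CREATE INDEX idx_documents_summary_embedding_hnsw
  ON documents
  USING hnsw (summary_embedding vector_cosine_ops)
  WITH (m = 16, ef_construction = 64);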
Why This Approach Works The hierarchical approach with self-reflection feels significantly more accurate than basic vector search implementations and performs well compared to what I&amp;rsquo;ve seen from enterprise search solutions. While I haven&amp;rsquo;t done formal benchmarking against commercial solutions like Glean or Microsoft Viva, the approach addresses the core problems I&amp;rsquo;ve observed with simpler RAG implementations.
The secret sauce isn&amp;rsquo;t any single technique - it&amp;rsquo;s the thoughtful combination of proven methods, optimized for the realities of enterprise data and user behavior. Most academic papers test on clean datasets with perfect queries. Real users ask terrible questions about messy data, and this system is built for that reality.
Conclusion The AI RAG space is rapidly evolving. There are thousands of papers, frameworks, and code repositories published every month. There are probably better ways to design a RAG system - from Microsoft&amp;rsquo;s GraphRAG to Salesforce&amp;rsquo;s &amp;ldquo;Next Gen RAG&amp;rdquo; - it always depends on your needs.
I&amp;rsquo;m a big fan of the Pareto frontier, so the content above focused on performance while being feasible for a solo developer or small team and easy on the wallet. The techniques presented here represent battle-tested approaches that deliver real-world results without requiring massive infrastructure investments.
The key takeaways:
Document-based chunking with context preservation beats simple fixed-size chunking
Late chunking significantly improves embedding quality for enterprise content
Hierarchical search with proper metadata handling scales better than flat vector search
HyDE and query expansion dramatically improve query understanding
Self-reflective RAG fills information gaps that single-pass retrieval misses
Hybrid search combining semantic and keyword approaches outperforms either alone

Thanks for reading, and I look forward to hearing from you.
]]></content:encoded></item><item><title>Reactive Data Sync: Mastering Concurrent Updates in Large-Scale Applications</title><link>https://bytevagabond.com/post/reactive-data-sync-mastering-concurrent-updates-in-large-scale-applications/</link><pubDate>Thu, 05 Sep 2024 10:50:00 UT</pubDate><guid>https://bytevagabond.com/post/reactive-data-sync-mastering-concurrent-updates-in-large-scale-applications/</guid><description>Introduction In the domain of complex software systems, particularly in the med-tech sector where data consistency and timely delivery are paramount, managing data synchronization across multiple layers of an application ecosystem presents unique challenges. These challenges are amplified when dealing with large-scale monorepos housing multiple interconnected applications and microservices. In this post, I&amp;rsquo;ll share my recent experience tackling a complex software architecture task that involved solving critical data consistency issues in a system handling sensitive data with high-frequency updates and potential conflicts.</description><category domain="https://bytevagabond.com/categories/software-architecture">Software Architecture</category><category domain="https://bytevagabond.com/categories/technology-trends">Technology Trends</category><category domain="https://bytevagabond.com/categories/digital-innovation">Digital Innovation</category><category domain="https://bytevagabond.com/categories/software-engineering">Software Engineering</category><category domain="https://bytevagabond.com/categories/industry-insights">Industry Insights</category><content:encoded><![CDATA[Introduction In the domain of complex software systems, particularly in the med-tech sector where data consistency and timely delivery are paramount, managing data synchronization across multiple layers of an application ecosystem presents unique challenges. These challenges are amplified when dealing with large-scale monorepos housing multiple interconnected applications and microservices. In this post, I&amp;rsquo;ll share my recent experience tackling a complex software architecture task that involved solving critical data consistency issues in a system handling sensitive data with high-frequency updates and potential conflicts.
The Problem: When Two Truths Diverge The ecosystem I was working with consisted of five applications sharing a significant amount of code, supported by a dozen microservices. At the core of our architecture were two critical components:
1. A synchronous store for optimistic updates
2. A persistent database layer in the client applications
This setup aimed to provide a responsive user experience while ensuring data persistence and integrity. However, as our ecosystem grew and user interactions became more complex, I encountered a perfect storm of issues:
1. Misalignment Between Store and Database The high frequency of writes and conflicts meant that our synchronous store and persistent database were often out of sync. This led to a situation where we no longer had a single source of truth, potentially causing data inconsistencies and loss of critical information.
Data Flow Misalignment Between Store and Database 2. Database Overwhelm Under heavy load, particularly during periods of numerous conflicts, our database would sometimes &amp;ldquo;give up,&amp;rdquo; ceasing to persist data altogether. This created a dangerous scenario where users believed their inputs were being saved (due to the store updating), while in reality, no data persistence was occurring.
3. Refactoring Constraints
Different implementations using the central core service, leading to refactoring constraints
The scale of our codebase, coupled with strict requirements, made it impractical to refactor all the different parts of our applications where high write volumes and conflicts were occurring. A solution was needed that could be implemented at the core layer without disrupting existing business logic or requiring extensive changes across our applications.
The Solution: Reactive Data Sync To address these challenges, I implemented a comprehensive solution at the base layer, specifically within our EntityStoreService. This architectural approach tackled the issues at their root without necessitating widespread changes across our applications. Let&amp;rsquo;s examine the key components of this solution.
Enhanced Data Flow Architecture 1. Queuing Mechanism The solution introduced a smart queuing system to manage updates more efficiently:
private updateQueue: Map&amp;lt;string, EntityQueuedOperation&amp;lt;T&amp;gt;&amp;gt; = new Map(); private updateSubject = new Subject&amp;lt;string&amp;gt;(); private queueOperation( id: string, operation: &amp;#39;update&amp;#39; | &amp;#39;remove&amp;#39;, document: Partial&amp;lt;T&amp;gt;, ): Promise&amp;lt;PouchDB.Core.Response&amp;gt; { return new Promise((resolve, reject) =&amp;gt; { const existingOperation = this.updateQueue.get(id); const initialQueueTimestamp = existingOperation?.initialQueueTimestamp || Date.now(); this.updateQueue.set(id, { id, operation, document, isProcessing: false, resolve, reject, initialQueueTimestamp, }); this.updateSubject.next(id); }); } This queue allows for batching updates and handling them more efficiently, reducing the likelihood of conflicts and database overwhelm. The architectural beauty of this approach lies in its ability to manage high-frequency updates at the core level, providing a buffer that smooths out the data flow across the entire application ecosystem.
2. Debouncing, Smart Processing, and Queue Flushing To further optimize the update process, a sophisticated mechanism combining debouncing, smart queue processing, and a flush queue feature was implemented:
private _setupSmartQueueProcessing(): void { this.updateSubject.pipe(debounceTime(DEBOUNCE_TIME_MS)).subscribe((id) =&amp;gt; this._processQueuedOperation(id)); timer(0, DEBOUNCE_TIME_MS) .pipe(filter(() =&amp;gt; this.updateQueue.size &amp;gt; 0)) .subscribe(() =&amp;gt; { const now = Date.now(); const entries = Array.from(this.updateQueue.entries()); for (const [id, operation] of entries) { if (now - operation.initialQueueTimestamp &amp;gt; MAX_QUEUE_TIME_MS) { this._processQueuedOperation(id); } } }); } This multi-faceted approach ensures that the database isn&amp;rsquo;t overwhelmed with rapid-fire updates while still maintaining responsiveness. The debouncing mechanism coalesces multiple updates to the same entity, reducing unnecessary processing.
The flush queue mechanism, represented by the timer and subsequent processing, acts as a safeguard against operations lingering in the queue for too long. If an operation exceeds MAX_QUEUE_TIME_MS in the queue, it&amp;rsquo;s forcefully processed, ensuring that no update is indefinitely delayed.
This architectural decision strikes a delicate balance between efficiency and timely data persistence, crucial in an environment where both system performance and data immediacy are paramount.
3. Robust Error Handling and Retries with Reconnection Strategy To address the issue of database failures, a comprehensive error handling and retry mechanism with an additional layer of resilience was implemented:
async executeWithRetry&amp;lt;T extends PouchContentBase&amp;gt;( operation: () =&amp;gt; Promise&amp;lt;PouchDB.Core.Response&amp;gt;, newDoc: T, ): Promise&amp;lt;PouchDB.Core.Response&amp;gt; { const pouchReconnectTimeout = setTimeout(() =&amp;gt; this.pouch.tryReconnection(), RETRY_TIMEOUT_MS * MAX_RETRIES); for (let attempt = 0; attempt &amp;lt; MAX_RETRIES; attempt&#43;&#43;) { try { const result = await operation(); clearTimeout(pouchReconnectTimeout); return result; } catch (error: any) { if (error.status === NOT_FOUND_ERROR_CODE) { clearTimeout(pouchReconnectTimeout); throw error; } if (error.status === CONFLICT_ERROR_CODE &amp;amp;&amp;amp; attempt &amp;gt;= CONFLICT_RESOLUTION_THRESHOLD) { clearTimeout(pouchReconnectTimeout); return await this.resolveConflict(newDoc as PouchDocument&amp;lt;T&amp;gt;); } if (attempt === MAX_RETRIES - 1) { clearTimeout(pouchReconnectTimeout); await this.pouch.tryReconnection(); throw new Error(&amp;#39;Operation failed after all retry attempts&amp;#39;); } await new Promise((resolve) =&amp;gt; setTimeout(resolve, RETRY_TIMEOUT_MS * attempt)); } } clearTimeout(pouchReconnectTimeout); throw new Error(&amp;#39;Operation failed after all retry attempts&amp;#39;); } This method ensures that temporary database issues don&amp;rsquo;t result in data loss, giving the system multiple opportunities to successfully persist data. The pouchReconnectTimeout serves as a final safety net. If operations consistently fail to complete within the allocated time, it triggers a forced reconnection to the database, essentially rebuilding the entire connection.
In designing this solution, I made a conscious effort to minimize the use of setTimeout and Promises. As explained in my previous post on the event loop, these Web APIs can quickly bloat memory, especially with intense write operations. By using them judiciously, I&amp;rsquo;ve created a more efficient system that can handle high-frequency updates without unnecessary memory overhead.
It&amp;rsquo;s worth noting that in extensive testing, including simulated high-load scenarios, this reconnection mechanism was not triggered even once. This speaks to the robustness of the primary error handling and retry logic. However, its presence provides an additional layer of resilience, crucial in an environment where data integrity is paramount.
The architectural foresight in implementing this &amp;ldquo;last gate&amp;rdquo; mechanism exemplifies a commitment to building a system that can gracefully handle even the most extreme edge cases, ensuring continuous operation and data consistency in critical scenarios.
4. Intelligent Conflict Resolution To tackle the complex issue of conflicts, a conflict resolution strategy was implemented:
private async resolveConflict&amp;lt;T extends PouchContentBase&amp;gt;(newDoc: PouchDocument&amp;lt;T&amp;gt;): Promise&amp;lt;PouchDB.Core.Response&amp;gt; { console.error(&amp;#39;Conflict detected, attempting to resolve&amp;#39;); const existingDoc = (await this.pouch.getDoc(newDoc._id)) as PouchDocument&amp;lt;T&amp;gt;; const mergedDoc = this.smartMerge(newDoc, existingDoc); return await this.pouch.putDoc(mergedDoc); } private smartMerge&amp;lt;T extends PouchContentBase&amp;gt;( newDoc: Partial&amp;lt;PouchDocument&amp;lt;T&amp;gt;&amp;gt;, existingDoc: PouchDocument&amp;lt;T&amp;gt;, ): PouchDocument&amp;lt;T&amp;gt; { const mergedDoc = { ...existingDoc }; Object.keys(newDoc).forEach((key) =&amp;gt; { if (key === &amp;#39;_id&amp;#39; || key === &amp;#39;_rev&amp;#39;) { return; } const newValue = newDoc[key as keyof PouchDocument&amp;lt;T&amp;gt;]; const existingValue = existingDoc[key as keyof PouchDocument&amp;lt;T&amp;gt;]; if (this.isScalar(newValue) || this.isScalar(existingValue)) { mergedDoc[key as keyof PouchDocument&amp;lt;T&amp;gt;] = newValue as any; } else if (Array.isArray(newValue) &amp;amp;&amp;amp; Array.isArray(existingValue)) { mergedDoc[key as keyof PouchDocument&amp;lt;T&amp;gt;] = this.mergeArrays(newValue, existingValue) as any; } else if (typeof newValue === &amp;#39;object&amp;#39; &amp;amp;&amp;amp; typeof existingValue === &amp;#39;object&amp;#39;) { mergedDoc[key as keyof PouchDocument&amp;lt;T&amp;gt;] = this.smartMerge( newValue as Partial&amp;lt;PouchDocument&amp;lt;T&amp;gt;&amp;gt;, existingValue as any as PouchDocument&amp;lt;T&amp;gt;, ) as any; } }); return mergedDoc; } This approach allows for intelligently merging conflicting documents, preserving important changes from both versions and reducing data loss. In a context where every piece of information can be crucial, this conflict resolution mechanism ensures that no critical updates are inadvertently overwritten.
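The smartMerge method relies on isScalar and mergeArrays helpers that aren&amp;rsquo;t shown above. A plausible minimal version of them, written as a sketch rather than taken from the original code, could look like this (they would live in the same service class):

// Sketch only: plausible helpers for smartMerge, not the original implementation.
private isScalar(value: unknown): boolean {
  // Treat null/undefined and anything that isn't an object (or array) as scalar.
  return value === null || value === undefined || typeof value !== 'object';
}

private mergeArrays<U>(newArray: U[], existingArray: U[]): U[] {
  // Prefer the incoming order, then append existing items that no longer appear.
  // Deduplicating by JSON value is a simplification; entity arrays would typically
  // be merged by a stable id instead.
  const seen = new Set(newArray.map((item) => JSON.stringify(item)));
  const preservedExisting = existingArray.filter((item) => !seen.has(JSON.stringify(item)));
  return [...newArray, ...preservedExisting];
}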
5. Optimistic Updates with Rollback To maintain a responsive user experience while ensuring data integrity, an optimistic update mechanism with rollback capabilities was implemented:
public updateDoc(id: string, updates: Partial&amp;lt;T&amp;gt;): Promise&amp;lt;PouchDB.Core.Response&amp;gt; { console.debug(`[EntityStoreServiceBase.${this.type}] updateDoc`, id, updates); this.store.upsert(id, updates); return this.queueOperation(id, &amp;#39;update&amp;#39;, updates); } private async _handleFailedUpdate(operation: EntityQueuedOperation&amp;lt;T&amp;gt;, error: any): Promise&amp;lt;void&amp;gt; { console.error(`Handling failed ${operation.operation} for document ${operation.id}`); try { const latestDoc = await this.pouch.getDoc(operation.id); if (operation.operation === &amp;#39;update&amp;#39;) { this.store.upsert(operation.id, latestDoc as T); console.debug(`Successfully reverted and updated document ${operation.id} in store`); } else if (operation.operation === &amp;#39;remove&amp;#39;) { this.store.upsert(operation.id, latestDoc as T); console.debug(`Re-added document ${operation.id} to store due to failed removal`); } } catch (fetchError) { console.error(`Failed to fetch latest version of document ${operation.id}:`, fetchError); this.store.remove(operation.id); console.debug(`Removed document ${operation.id} from store due to failed operation and fetch`); } console.error(`Operation failed for document ${operation.id}:`, error); this.updateQueue.delete(operation.id); operation.reject(error); } This system allows for immediately updating the synchronous store for a snappy user experience, crucial in fast-paced environments, while queuing the actual database update. In case of failure, the store can be rolled back to maintain consistency with the persistent layer, ensuring that users always see accurate, up-to-date information.
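updateDoc hands the write off to queueOperation, which is also not shown. The sketch below illustrates one plausible shape for the queue entry and the queuing step, inferred from how _handleFailedUpdate uses it; the payload field and the processQueue call are assumptions:
// Hypothetical shape of a queued operation, inferred from _handleFailedUpdate above.
interface EntityQueuedOperation<T> {
  id: string;
  operation: 'update' | 'remove';
  payload?: Partial<T>; // assumption: the pending changes travel with the entry
  resolve: (response: PouchDB.Core.Response) => void;
  reject: (error: any) => void;
}

// Sketch: one pending operation per document id, so a later call for the same id
// replaces the earlier payload and only the latest state is persisted.
private queueOperation(
  id: string,
  operation: 'update' | 'remove',
  payload?: Partial<T>,
): Promise<PouchDB.Core.Response> {
  return new Promise((resolve, reject) => {
    this.updateQueue.set(id, { id, operation, payload, resolve, reject });
    this.processQueue(); // hypothetical: drains the queue asynchronously
  });
}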
The Impact: A More Robust and Responsive Ecosystem By implementing these solutions at the core layer of the application stack, several significant improvements were achieved:
1. Enhanced Data Consistency: The intelligent queuing and conflict resolution mechanisms dramatically reduced instances of data misalignment between the store and database, crucial for maintaining accurate records.
2. Improved Resilience: The retry logic and smart processing helped the system gracefully handle periods of high load and temporary database issues, ensuring continuous availability of critical data.
3. Maintained Responsiveness: The optimistic update system, combined with the queuing mechanism, allowed for a snappy user interface even during complex data operations, supporting efficient workflows for users.
4. Reduced Data Loss: The conflict resolution and rollback capabilities significantly decreased instances of unintended data loss during concurrent updates, safeguarding vital information.
5. Scalability: By implementing these solutions at the core layer, the performance and reliability of all applications were improved without requiring extensive refactoring of individual components, a crucial consideration in our regulated environment.
Conclusion Addressing data synchronization issues in a large-scale, multi-application environment is a complex challenge that requires a delicate balance of performance, reliability, and data integrity. By carefully analyzing the root causes of the problems and implementing a comprehensive solution at the core layer, I was able to dramatically improve the consistency and performance of the entire ecosystem.
The key architectural insights from this experience are:
1. Address systemic issues at the lowest common denominator when possible, allowing for widespread improvements without disruptive changes.
2. Implement intelligent queuing and processing to manage high-frequency updates, crucial in data-intensive applications.
3. Always plan for failure with robust retry and rollback mechanisms, ensuring data integrity even in edge cases.
4. Use optimistic updates judiciously to balance responsiveness with data accuracy, supporting efficient user workflows.
5. Invest in smart conflict resolution to preserve data in complex scenarios, critical when multiple users may update the same record.
While I&amp;rsquo;m proud of the robustness and elegance of this solution, I recognize that in the ever-evolving landscape of software development, continuous improvement and adaptation are necessary. I remain committed to refining this approach as new challenges emerge, always with the goal of supporting better user experiences through reliable, efficient technology.
By sharing this experience, I hope to contribute to the broader conversation on building resilient, scalable systems in critical domains. Remember, with the right architectural approach, even the most daunting technical challenges can be overcome, paving the way for innovations that can truly make a difference.
Bonus Treat: A Fresh Take on PouchDB with Reactive Goodness Here&amp;rsquo;s a little dessert for the curious developer: a sneak peek at how you might whip up a reactive PouchDB store from scratch. This approach blends the robustness of PouchDB with the zingy flavors of RxJS, creating a deliciously responsive data management solution. Consider it a recipe for your next project&amp;rsquo;s secret sauce:
First, let&amp;rsquo;s look at a service that implements a reactive PouchDB store:
import { Injectable } from &amp;#34;@angular/core&amp;#34;; import PouchDB from &amp;#34;pouchdb&amp;#34;; import PouchFind from &amp;#34;pouchdb-find&amp;#34;; import { BehaviorSubject, Observable, defer, from, of } from &amp;#34;rxjs&amp;#34;; import { map, tap, switchMap, filter, catchError, shareReplay, take, } from &amp;#34;rxjs/operators&amp;#34;; import { DocTypes, Document, AnyDocument, createDocument, addAttachment, Attachment, DocAttachmentKeys, NON_REPLICABLE_TYPES, getDocTypeFromId, } from &amp;#34;../models&amp;#34;; import { environment } from &amp;#34;src/environments/environment&amp;#34;; PouchDB.plugin(PouchFind); const preloadDocsTypes: DocTypes[] = [&amp;#34;session&amp;#34;, &amp;#34;user&amp;#34;]; @Injectable({ providedIn: &amp;#34;root&amp;#34;, }) export class PouchDBService { private db!: PouchDB.Database; private stores = new Map&amp;lt; DocTypes, BehaviorSubject&amp;lt;Map&amp;lt;string, AnyDocument&amp;gt;&amp;gt; &amp;gt;(); private dbReady$ = new BehaviorSubject&amp;lt;boolean&amp;gt;(false); private replications: { [key: string]: PouchDB.Replication.Sync&amp;lt;{}&amp;gt; } = {}; constructor() { this.initializeDatabase().catch((error) =&amp;gt; console.error(&amp;#34;Database initialization failed:&amp;#34;, error) ); } private get dbIsReady(): Observable&amp;lt;boolean&amp;gt; { return this.dbReady$.pipe( filter((ready) =&amp;gt; ready), take(1), shareReplay(1) ); } public getAll&amp;lt;T extends DocTypes&amp;gt;(type: T): Observable&amp;lt;Document&amp;lt;T&amp;gt;[]&amp;gt; { return this.dbIsReady.pipe( switchMap(() =&amp;gt; this.getStore(type)), map((store) =&amp;gt; Array.from(store.values())), catchError(() =&amp;gt; of([])), shareReplay(1) ); } public get&amp;lt;T extends DocTypes&amp;gt;( id: string ): Observable&amp;lt;Document&amp;lt;T&amp;gt; | undefined&amp;gt; { return this.dbIsReady.pipe( switchMap(() =&amp;gt; { const type = getDocTypeFromId(id); if (!type) { console.error(`Unable to determine document type for id: ${id}`); return of(undefined); } return this.getStore(type as T).pipe( switchMap((store) =&amp;gt; { const doc = store.get(id) as Document&amp;lt;T&amp;gt; | undefined; return doc ? 
of(doc) : this.retrieveDocumentFromDatabase&amp;lt;T&amp;gt;(id); }) ); }), catchError((error) =&amp;gt; { console.error(`Error retrieving document with id ${id}:`, error); return of(undefined); }), shareReplay(1) ); } public create&amp;lt;T extends DocTypes&amp;gt;( type: T, data: Partial&amp;lt;Document&amp;lt;T&amp;gt;&amp;gt;, attachment?: Attachment&amp;lt;T&amp;gt; ): Observable&amp;lt;Document&amp;lt;T&amp;gt;&amp;gt; { let doc = createDocument(type, data) as Document&amp;lt;T&amp;gt;; if (attachment) { doc = addAttachment( doc, attachment.key, attachment.data, attachment.contentType ); } return this.dbIsReady.pipe( switchMap(() =&amp;gt; from(this.db.post(doc as PouchDB.Core.PostDocument&amp;lt;Document&amp;lt;T&amp;gt;&amp;gt;)) ), map((response) =&amp;gt; ({ ...doc, _id: response.id, _rev: response.rev })), catchError((error) =&amp;gt; { console.error(`Error creating document of type ${type}:`, error); throw error; }), shareReplay(1) ); } public update&amp;lt;T extends DocTypes&amp;gt;( doc: Document&amp;lt;T&amp;gt;, attachment?: Attachment&amp;lt;T&amp;gt; ): Observable&amp;lt;Document&amp;lt;T&amp;gt;&amp;gt; { let updatedDoc = { ...doc, updatedAt: Date.now() }; if (attachment) { updatedDoc = addAttachment( updatedDoc, attachment.key, attachment.data, attachment.contentType ); } return this.dbIsReady.pipe( switchMap(() =&amp;gt; from(this.db.put(updatedDoc as PouchDB.Core.PutDocument&amp;lt;Document&amp;lt;T&amp;gt;&amp;gt;)) ), map((response) =&amp;gt; ({ ...updatedDoc, _rev: response.rev })), catchError((error) =&amp;gt; { console.error(`Error updating document: ${doc._id}`, error); throw error; }), shareReplay(1) ); } public getAttachment&amp;lt;T extends DocTypes&amp;gt;( docId: string, attachmentId: DocAttachmentKeys&amp;lt;T&amp;gt;, options: { rev?: string } = {} ): Observable&amp;lt;Blob | Buffer&amp;gt; { return this.dbIsReady.pipe( switchMap(() =&amp;gt; defer( () =&amp;gt; new Promise&amp;lt;Blob&amp;gt;((resolve, reject) =&amp;gt; this.db.getAttachment( docId, attachmentId as string, options, (err, blob) =&amp;gt; (err ? 
reject(err) : resolve(blob as Blob)) ) ) ) ), catchError((error) =&amp;gt; { console.error(`Error in getAttachment observable:`, error); return []; }), shareReplay(1) ); } public deleteAttachment&amp;lt;T extends DocTypes&amp;gt;( docId: string, attachmentKey: DocAttachmentKeys&amp;lt;T&amp;gt;, rev: string ): Observable&amp;lt;PouchDB.Core.Response&amp;gt; { return this.dbIsReady.pipe( switchMap(() =&amp;gt; from(this.db.removeAttachment(docId, attachmentKey as string, rev)) ), catchError((error) =&amp;gt; { console.error( `Error deleting attachment ${ attachmentKey as string } for document ${docId}:`, error ); throw error; }), shareReplay(1) ); } public delete&amp;lt;T extends DocTypes&amp;gt;( doc: Document&amp;lt;T&amp;gt; ): Observable&amp;lt;PouchDB.Core.Response&amp;gt; { return this.dbIsReady.pipe( switchMap(() =&amp;gt; from(this.db.remove(doc))), tap(() =&amp;gt; this.updateStore({ ...doc, _deleted: true } as AnyDocument)), catchError((error) =&amp;gt; { console.error(`Error deleting document: ${doc._id}`, error); throw error; }), shareReplay(1) ); } public clearAllData(): Observable&amp;lt;void&amp;gt; { return this.dbIsReady.pipe( switchMap(() =&amp;gt; from(this.destroyDatabase())), switchMap(() =&amp;gt; from(this.initializeDatabase())), tap(() =&amp;gt; this.stores.forEach((store) =&amp;gt; store.next(new Map()))), catchError((error) =&amp;gt; { console.error(&amp;#34;Error clearing all data:&amp;#34;, error); throw error; }), shareReplay(1) ); } public query&amp;lt;T extends DocTypes&amp;gt;( type: T, selector: PouchDB.Find.Selector, sort?: Array&amp;lt;string | { [propName: string]: &amp;#34;asc&amp;#34; | &amp;#34;desc&amp;#34; }&amp;gt; ): Observable&amp;lt;Document&amp;lt;T&amp;gt;[]&amp;gt; { return this.dbIsReady.pipe( switchMap(() =&amp;gt; from(this.db.find({ selector: { ...selector, type }, sort })) ), map((result) =&amp;gt; result.docs as Document&amp;lt;T&amp;gt;[]), tap((docs) =&amp;gt; docs.forEach((doc) =&amp;gt; this.updateStore(doc))), catchError((error) =&amp;gt; { console.error(`Error querying documents of type ${type}:`, error); return of([]); }), shareReplay(1) ); } public startReplication(userDBs: { [key: string]: string }): void { this.stopReplication(); for (const [userKey, userDBUrl] of Object.entries(userDBs)) { const parsedUrl = new URL(userDBUrl); const { username, password } = parsedUrl; const remoteUrl = [ environment.couchDBProtocol, username, &amp;#34;:&amp;#34;, password, &amp;#34;@&amp;#34;, environment.couchDBDomain, parsedUrl.pathname, ].join(&amp;#34;&amp;#34;); // Docs starting with &amp;#39;_&amp;#39; or in NON_REPLICABLE_TYPES are not replicated const filterFunction = (doc: any) =&amp;gt; !doc._id.startsWith(&amp;#34;_&amp;#34;) &amp;amp;&amp;amp; !NON_REPLICABLE_TYPES.includes(doc.type); this.replications[userKey] = this.db.sync(remoteUrl, { live: true, retry: true, filter: filterFunction, }); } } public stopReplication(): void { Object.values(this.replications) .filter((replication) =&amp;gt; replication) .forEach((replication) =&amp;gt; replication.cancel()); this.replications = {}; } private async initializeDatabase() { if (this.db) return; this.db = new PouchDB(&amp;#34;local-db&amp;#34;); // or any other name await Promise.all([ this.db.createIndex({ index: { fields: [&amp;#34;type&amp;#34;] } }), this.db.createIndex({ index: { fields: [&amp;#34;createdAt&amp;#34;] } }), ]); this.setupChanges(); await Promise.all( preloadDocsTypes.map((type) =&amp;gt; this.loadAllDocuments(type)) ); console.log(&amp;#34;Database initialized 
successfully&amp;#34;); this.dbReady$.next(true); } private async destroyDatabase(): Promise&amp;lt;void&amp;gt; { if (this.db) { await this.db.destroy(); this.db = null!; this.dbReady$.next(false); } } private setupChanges() { this.db .changes({ since: &amp;#34;now&amp;#34;, live: true, include_docs: true }) .on(&amp;#34;change&amp;#34;, (change) =&amp;gt; { if (change.doc) { this.updateStore(change.doc as AnyDocument); } }); } private getStore&amp;lt;T extends DocTypes&amp;gt;( type: T ): BehaviorSubject&amp;lt;Map&amp;lt;string, Document&amp;lt;T&amp;gt;&amp;gt;&amp;gt; { if (!this.stores.has(type)) { this.stores.set(type, new BehaviorSubject(new Map())); this.loadAllDocuments(type); } return this.stores.get(type) as unknown as BehaviorSubject&amp;lt; Map&amp;lt;string, Document&amp;lt;T&amp;gt;&amp;gt; &amp;gt;; } private updateStore(doc: AnyDocument) { const store = this.getStore(doc.type); const currentMap = new Map(store.value); if (doc._deleted) { currentMap.delete(doc._id); } else { currentMap.set(doc._id, doc); } store.next(currentMap); } private retrieveDocumentFromDatabase&amp;lt;T extends DocTypes&amp;gt;( id: string ): Observable&amp;lt;Document&amp;lt;T&amp;gt; | undefined&amp;gt; { return from(this.db.get&amp;lt;Document&amp;lt;T&amp;gt;&amp;gt;(id)).pipe( filter((doc) =&amp;gt; !!doc), tap((doc) =&amp;gt; this.updateStore(doc as AnyDocument)), catchError(() =&amp;gt; of(undefined)) ); } private async loadAllDocuments&amp;lt;T extends DocTypes&amp;gt;(type: T) { try { const result = await this.db.find({ selector: { type } }); const store = this.getStore(type); const newMap = new Map( result.docs.map((doc) =&amp;gt; [doc._id, doc as Document&amp;lt;T&amp;gt;]) ); store.next(newMap); } catch (error) { console.error(`Error loading documents of type ${type}:`, error); } } } And here&amp;rsquo;s the corresponding model file for modularity:
import { customAlphabet } from &amp;#34;nanoid&amp;#34;; // Define non-replicable document types export const NON_REPLICABLE_TYPES: DocTypes[] = [&amp;#34;session&amp;#34;]; // Define and export attachment keys for each document type export const AttachmentKeys = { user: { userImage: &amp;#34;walkImage&amp;#34; as const, }, session: {}, } as const; // Type helper for attachment keys export type AttachmentKeysType = typeof AttachmentKeys; export type DocAttachmentKeys&amp;lt;T extends DocTypes&amp;gt; = keyof AttachmentKeysType[T]; export type Attachment&amp;lt;T extends DocTypes&amp;gt; = { key: DocAttachmentKeys&amp;lt;T&amp;gt;; data: Blob | Buffer | string; contentType: string; }; // Define a type for the document schemas const DocSchemas = { user: { name: &amp;#34;&amp;#34; as string, user_uid: &amp;#34;&amp;#34; as string, }, session: { user_id: &amp;#34;&amp;#34; as string, user_uid: &amp;#34;&amp;#34; as string, name: &amp;#34;&amp;#34; as string, token: &amp;#34;&amp;#34; as string, refreshToken: &amp;#34;&amp;#34; as string, issued: 0 as number, expires: 0 as number, provider: &amp;#34;&amp;#34; as string, password: &amp;#34;&amp;#34; as string, userDBs: {} as { [key: string]: string }, }, } as const; // Infer types from the schemas export type DocTypes = keyof typeof DocSchemas; export type DocSchema&amp;lt;T extends DocTypes&amp;gt; = (typeof DocSchemas)[T]; // Base document type export interface BaseDocument&amp;lt;T extends DocTypes&amp;gt; { _id: string; _rev: string; _deleted?: boolean; type: T; createdAt: number; updatedAt: number; _attachments: Partial&amp;lt; Record&amp;lt;DocAttachmentKeys&amp;lt;T&amp;gt;, PouchDB.Core.AttachmentData&amp;gt; &amp;gt;; } // Create a type that combines BaseDocument with a specific schema export type Document&amp;lt;T extends DocTypes&amp;gt; = BaseDocument&amp;lt;T&amp;gt; &amp;amp; DocSchema&amp;lt;T&amp;gt;; // Infer the union type of all documents export type AnyDocument = Document&amp;lt;DocTypes&amp;gt;; // Helper function to create a new document export function createDocument&amp;lt;T extends DocTypes&amp;gt;( type: T, data: Partial&amp;lt;Document&amp;lt;T&amp;gt;&amp;gt; ): Omit&amp;lt;Document&amp;lt;T&amp;gt;, &amp;#34;_id&amp;#34; | &amp;#34;_rev&amp;#34;&amp;gt; { return { _id: data._id || generateId(type), type, createdAt: Date.now(), updatedAt: Date.now(), _attachments: {}, ...DocSchemas[type], ...data, } as unknown as Omit&amp;lt;Document&amp;lt;T&amp;gt;, &amp;#34;_id&amp;#34; | &amp;#34;_rev&amp;#34;&amp;gt;; } // Define a custom alphabet for nanoid (excluding similar-looking characters) const alphabet = &amp;#34;12345ABCDEF&amp;#34;; // Create a nanoid function with our custom alphabet const nanoid = customAlphabet(alphabet, 8); /** * Generates a unique, sortable, and type-specific ID * @param type The document type (must be a key of DocSchemas) * @returns A string ID in the format: `${type}_${YYYYMMDD}_${randomString}` */ function generateId&amp;lt;T extends keyof typeof DocSchemas&amp;gt;(type: T): string { const date = new Date(); const dateString = date.toISOString().slice(0, 10).replace(/-/g, &amp;#34;&amp;#34;); // YYYYMMDD const randomPart = nanoid(); return `${type}_${dateString}_${randomPart}`; } // Helper function to extract the document type from an ID export function getDocTypeFromId(id: string): DocTypes | undefined { const [type] = id.split(&amp;#34;_&amp;#34;); return Object.keys(DocSchemas).includes(type) ? 
(type as DocTypes) : undefined; } // Helper function to add an attachment export function addAttachment&amp;lt;T extends DocTypes&amp;gt;( doc: Document&amp;lt;T&amp;gt;, key: DocAttachmentKeys&amp;lt;T&amp;gt;, data: Blob | Buffer | string, contentType: string ): Document&amp;lt;T&amp;gt; { return { ...doc, _attachments: { ...doc._attachments, [key]: { content_type: contentType, data: data, }, }, }; } // Helper function to get an attachment export function getAttachment&amp;lt;T extends DocTypes&amp;gt;( doc: Document&amp;lt;T&amp;gt;, key: DocAttachmentKeys&amp;lt;T&amp;gt; ): PouchDB.Core.AttachmentData | undefined { return doc._attachments[key]; } This reactive PouchDB store implementation offers several advantages:
Reactivity: Changes to the database are automatically reflected in the observables, ensuring your UI always displays the most up-to-date data.
Type safety: The use of generics and well-defined types ensures you&amp;rsquo;re always working with the correct document structures.
Efficiency: By using Maps internally, lookups for individual documents are very fast (O(1) time complexity).
Modularity: The separation of concerns between the store service and the document models makes the code easier to maintain and extend.
Flexibility: The observable-based API integrates seamlessly with RxJS, allowing for powerful data transformations and combinations.
This approach provides a solid foundation for building complex, reactive applications with PouchDB while maintaining clean, modular code. It&amp;rsquo;s particularly well-suited for projects that require offline-first capabilities or real-time synchronization between clients.
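To make the recipe concrete, here is a small, hypothetical consumer of the service: an Angular component that lists user documents reactively and creates a new one. The component, template, and import paths are invented for the example; only the getAll and create calls come from the service above:
import { Component, inject } from '@angular/core';
import { Observable } from 'rxjs';
// Assumed relative paths; adjust to your own project layout.
import { PouchDBService } from './pouchdb.service';
import { Document } from './models';

@Component({
  selector: 'app-user-list',
  template: `
    <ul>
      <li *ngFor="let user of users$ | async">{{ user.name }}</li>
    </ul>
    <button (click)="addUser()">Add user</button>
  `,
})
export class UserListComponent {
  private pouch = inject(PouchDBService);

  // Emits the full list of user documents and keeps emitting as the database changes.
  users$: Observable<Document<'user'>[]> = this.pouch.getAll('user');

  addUser(): void {
    this.pouch
      .create('user', { name: 'Ada', user_uid: 'ada-001' })
      .subscribe();
  }
}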
]]></content:encoded></item><item><title>Web UI Magic: Reactivity Fundamentals and Angular Best Practices</title><link>https://bytevagabond.com/post/web-ui-magic-reactivity-fundamentals/</link><pubDate>Thu, 27 Jun 2024 10:50:00 UT</pubDate><guid>https://bytevagabond.com/post/web-ui-magic-reactivity-fundamentals/</guid><description>Web development has become more sophisticated, but understanding the fundamentals and different approaches to reactivity can help you write better, more efficient applications. This post will explain how web engines work, explore various reactivity models in different frameworks, and provide performance tips specifically for Angular developers.
1. How Web Engines Work ELI5: The DOM and the JS Event Loop The DOM The Document Object Model (DOM) is essentially a tree structure representing your web page.</description><category domain="https://bytevagabond.com/categories/web-development">Web Development</category><category domain="https://bytevagabond.com/categories/technology-trends">Technology Trends</category><category domain="https://bytevagabond.com/categories/digital-innovation">Digital Innovation</category><category domain="https://bytevagabond.com/categories/software-engineering">Software Engineering</category><category domain="https://bytevagabond.com/categories/industry-insights">Industry Insights</category><content:encoded><![CDATA[Web development has become more sophisticated, but understanding the fundamentals and different approaches to reactivity can help you write better, more efficient applications. This post will explain how web engines work, explore various reactivity models in different frameworks, and provide performance tips specifically for Angular developers.
1. How Web Engines Work ELI5: The DOM and the JS Event Loop The DOM The Document Object Model (DOM) is essentially a tree structure representing your web page. Each HTML element is a node in this tree, and JavaScript allows us to manipulate these nodes to dynamically update the content, structure, and style of the web page.
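A few lines of script are enough to find a node in that tree and mutate it; a toy example, assuming an element with the id greeting exists on the page:
// Toy example: the DOM is a tree of nodes we can query and mutate at runtime.
const greeting = document.getElementById('greeting'); // find an existing node
if (greeting) {
  greeting.textContent = 'Hello from JavaScript!'; // change its content
  const child = document.createElement('span'); // create a brand-new node...
  child.textContent = ' (updated)';
  greeting.appendChild(child); // ...and attach it to the tree
}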
Example of the document object model&amp;rsquo;s render tree The JavaScript Event Loop The JavaScript event loop is the mechanism that allows JavaScript to perform non-blocking operations. JavaScript is single-threaded, meaning it can only do one thing at a time. The event loop manages this by placing tasks (like user events, HTTP requests, etc.) in a queue and processing them one by one.
Visualization of the js call stack and event loop Components of the Event Loop and How It Works Call Stack: The call stack is where JavaScript keeps track of function calls. When a function is invoked, it gets added to the top of the stack, and when it finishes, it gets removed from the stack. JavaScript starts by executing all the initial code in the call stack. Web APIs: These are browser-provided APIs like setTimeout, DOM events, fetch, etc. They handle tasks asynchronously. When an async operation like setTimeout is encountered, it is handed off to the Web APIs. Callback Queue: This is a queue of functions that are ready to be executed once the call stack is clear. When an asynchronous operation completes, its callback function is placed in this queue. Event Loop: The event loop constantly checks if the call stack is empty and if there are any tasks in the callback queue. If the stack is empty, it moves the first task from the queue to the stack, allowing it to be executed. Understanding this flow is crucial for grasping how frameworks manage reactivity. I also highly recommend watching this talk which goes more into detail &amp;ldquo;What the heck is the event loop anyway?&amp;rdquo;
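A tiny script makes this flow visible: synchronous code on the call stack runs first, Promise callbacks drain from the microtask queue next, and only then does the event loop pull the setTimeout callback from the callback queue:
console.log('1: synchronous, runs on the call stack');

setTimeout(() => {
  console.log('4: setTimeout callback, pulled from the callback queue');
}, 0);

Promise.resolve().then(() => {
  console.log('3: Promise callback, drained from the microtask queue');
});

console.log('2: still synchronous');
// Logged order: 1, 2, 3, 4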
2. Simplifying Web Development: Frameworks and Their Different Approaches to Reactivity Reactivity in web development refers to the automatic update of the UI when the application&amp;rsquo;s state changes. Web frameworks employ various reactivity models to manage UI updates efficiently. These approaches can be categorized along a spectrum from coarse-grained to fine-grained reactivity.
JS frameworks on the reactivity scale Coarse-Grained Reactivity This approach executes significant portions of application code to determine DOM updates. It&amp;rsquo;s easier to implement but potentially less efficient.
React and Angular lean towards this approach, re-executing components and checking references on state changes, though their underlying mechanisms differ. (Angular&amp;rsquo;s approach will be detailed in the next section.) Middle Ground Vue balances coarse and fine-grained reactivity using a runtime-based system with Refs (similar to Signals). It re-runs components on state changes but allows for more granular updates through its Ref-based Composition API (computed values and watchers). Svelte also fits here, using a compiler-based approach for .svelte files and a separate store mechanism for external reactivity. Its compiler optimizes updates efficiently, often re-executing less code than coarser-grained frameworks.
Solid and Qwik represent this approach, directly tying reactive signals to DOM updates. Key Concepts Understanding the core mechanisms behind reactive frameworks helps clarify their design choices and trade-offs. Three fundamental concepts shape the landscape of modern web development:
Virtual DOM vs. Direct DOM Manipulation React and Vue utilize a Virtual DOM approach, creating an in-memory representation of the UI for efficient comparison and selective updates. In contrast, Svelte and Solid compile components into code that manipulates the DOM directly, without a diffing layer at runtime. Angular uses a hybrid approach with its Incremental DOM. While the Virtual DOM provides a layer of abstraction that simplifies development, direct manipulation can offer performance benefits at the cost of increased complexity.
Virtual DOM diffing mechanism visualized Reactivity Primitives Frameworks employ various primitives to track and propagate state changes:
Values: Basic state representation, requiring comparison for updates. Observables: Used in Angular and Svelte, enabling complex data flows. Signals/Refs: Modern approaches in Vue, Qwik, and Solid allow fine-grained updates. Angular v17 and Svelte v5 also adopted signals, suggesting signals might become a native JS API (I discuss these developments here: Bet on the Web - Why the future of applications is web based). Compilation vs. Runtime Reactivity Svelte and Solid use compilation-based approaches, optimizing reactivity at build time and potentially reducing runtime overhead. React, Vue, and Angular primarily handle reactivity during execution, offering greater flexibility for dynamic content at the expense of some performance. Qwik combines both approaches, using partial hydration to balance build-time optimization and runtime adaptability. This distinction highlights the trade-off between compile-time optimization and runtime flexibility.
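To give a feel for what fine-grained reactivity means in practice, here is a minimal, framework-agnostic signal sketch in TypeScript. It is a stripped-down illustration of the idea, not how Solid, Qwik, or Angular actually implement signals:
// Minimal signal sketch: reads register the running effect as a subscriber,
// writes notify exactly those subscribers. No component re-execution needed.
type Effect = () => void;
let activeEffect: Effect | null = null;

function createSignal<T>(value: T): [() => T, (next: T) => void] {
  const subscribers = new Set<Effect>();
  const read = () => {
    if (activeEffect) subscribers.add(activeEffect); // track who depends on this value
    return value;
  };
  const write = (next: T) => {
    value = next;
    subscribers.forEach((fn) => fn()); // re-run only the dependent effects
  };
  return [read, write];
}

function createEffect(fn: Effect): void {
  activeEffect = fn;
  fn(); // the first run collects dependencies via the reads it performs
  activeEffect = null;
}

// Usage: only this one effect re-runs when count changes.
const [count, setCount] = createSignal(0);
createEffect(() => console.log('count is', count()));
setCount(1); // logs "count is 1"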
These concepts underpin the design philosophies of modern frameworks, influencing their approaches to performance, developer experience, and application architecture.
Conclusion In web development, choosing the right framework and reactivity model is crucial. Coarse-grained reactivity is simpler to implement but may lead to inefficiencies, while fine-grained reactivity offers better performance but requires deeper understanding.
As Miško Hevery argues, a broken app is easier to fix because its issues are obvious and straightforward to address. In contrast, a slow app is harder to optimize as it involves multiple complex adjustments, which can sometimes lead to additional problems.
Source: Exploring Reactivity Across Various Frameworks
3. Bonus: Best Performance Practices in Angular&amp;rsquo;s Reactivity Model Angular&amp;rsquo;s reactivity model is designed to provide efficient and effective ways to manage data updates and UI rendering. To make the most out of Angular&amp;rsquo;s capabilities, developers should adopt certain performance best practices. Here are some key strategies to optimize your Angular applications:
Zone.js Zone.js patches browser APIs to detect changes and make the app reactive Zone.js plays a crucial role in Angular’s reactivity model. It monkey patches Web APIs to detect asynchronous tasks and then triggers change detection throughout the application. This approach, while convenient, can be inefficient because it may cause unnecessary re-checking of component properties and re-rendering of both parent and child components.
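Conceptually, the monkey patching looks something like this; a simplified illustration of the idea, not Zone.js source code:
// Simplified illustration of how a zone can wrap setTimeout.
// Real Zone.js patches many more APIs (events, Promises, XHR, and so on).
const originalSetTimeout = window.setTimeout.bind(window);

(window as any).setTimeout = (handler: () => void, delay?: number) =>
  originalSetTimeout(() => {
    handler(); // run the scheduled work
    notifyChangeDetection(); // then tell the framework an async task just finished
  }, delay);

function notifyChangeDetection(): void {
  // In Angular, this is roughly where NgZone signals that change detection should run.
  console.log('async task finished, re-check the views');
}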
Zoneless Future Angular is moving towards a zoneless architecture. The elimination of Zone.js means less memory overhead and less main-thread usage. Additionally, it leads to smaller bundle sizes. Developers can prepare for this transition by adopting practices that ease the shift from Zone.js to a more efficient and fine-grained reactivity model.
Change Detection OnPush Implementation ChangeDetectionStrategy.OnPush is a performance optimization technique that can be implemented right away for immediate benefits. With OnPush, Angular only checks the component and its children when an @Input reference changes, an event fires inside the component, or an Observable bound through the async pipe emits (or when a check is requested explicitly via markForCheck). This drastically reduces redundant change detection cycles.
@Component({ selector: &amp;#39;app-example&amp;#39;, changeDetection: ChangeDetectionStrategy.OnPush, template: `...` }) export class ExampleComponent { } Future-Ready Adopting OnPush now will make it easier to switch to Angular’s zoneless architecture in the future. Once Zone.js is removed, Angular is expected to be on par with frameworks like Svelte and Solid in terms of performance.
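In practice, OnPush pairs naturally with Observables consumed through the async pipe, since each emission also marks the component for checking. A small, hypothetical example:
import { ChangeDetectionStrategy, Component } from '@angular/core';
import { Observable, interval, map } from 'rxjs';

@Component({
  selector: 'app-clock',
  changeDetection: ChangeDetectionStrategy.OnPush,
  template: `<p>Seconds elapsed: {{ seconds$ | async }}</p>`,
})
export class ClockComponent {
  // Each emission flows through the async pipe, which marks this OnPush
  // component for check, so no manual change detection calls are needed.
  seconds$: Observable<number> = interval(1000).pipe(map((i) => i + 1));
}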
Signals Signals represent Angular’s new fine-grained reactivity model. Unlike traditional change detection, signals provide a more precise way to update the DOM structure. This model allows for efficient, targeted updates, reducing unnecessary renders and significantly improving application performance. Angular signals guide
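For reference, the core primitives look like this inside a component (Angular v17+ API); a minimal sketch rather than a complete feature:
import { Component, computed, effect, signal } from '@angular/core';

@Component({
  selector: 'app-counter',
  template: `<button (click)="increment()">{{ count() }} (doubled: {{ doubled() }})</button>`,
})
export class CounterComponent {
  count = signal(0); // writable signal
  doubled = computed(() => this.count() * 2); // derived signal, stays in sync with count

  constructor() {
    // Re-runs whenever any signal read inside it changes.
    effect(() => console.log(`count is now ${this.count()}`));
  }

  increment(): void {
    this.count.update((current) => current + 1); // fine-grained update, no full re-render
  }
}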
Track By Using trackBy in ngFor is another essential practice for enhancing performance. This function helps Angular identify items in a list that have changed, allowing it to update only the necessary elements instead of re-rendering the entire list. This leads to faster, more efficient updates, particularly with large datasets.
@Component({ selector: &amp;#39;app-list&amp;#39;, template: ` &amp;lt;li *ngFor=&amp;#34;let post of posts; trackBy: identify&amp;#34;&amp;gt;{{post.title}}&amp;lt;/li&amp;gt; ` }) export class ListComponent { posts: any[] = []; identify(index: number, item: any) { return item.id; } } Control Flow: Defer Tag Angular&amp;rsquo;s new template syntax introduces powerful features such as the @defer directive and an enhanced @for loop:
Enhanced @for loop: It includes a faster diffing algorithm and built-in support for trackBy, further optimizing list rendering. @defer directive: This allows elements to be loaded and rendered only when needed, such as when they come into the viewport. This lazy-loading mechanism conserves resources and improves initial load times. &amp;lt;div class=&amp;#34;flex-container&amp;#34;&amp;gt; @for (post of this.posts;track post.id) { @defer (on viewport){ &amp;lt;app-card [post]=&amp;#34;post&amp;#34; class=&amp;#34;flex-item&amp;#34;/&amp;gt; } @placeholder (minimum 2000ms) { &amp;lt;div&amp;gt;Loading..&amp;lt;/div&amp;gt; } } &amp;lt;/div&amp;gt; By incorporating these practices, your Angular applications will not only perform better but also be better prepared for future enhancements in the framework. Embracing these strategies ensures efficient change detection, targeted updates, and reduced resource consumption, positioning your application at the forefront of modern web development.
]]></content:encoded></item><item><title>Calisthenics Workout Plan</title><link>https://bytevagabond.com/post/calisthenics-workout-plan/</link><pubDate>Thu, 09 May 2024 22:43:59 UT</pubDate><guid>https://bytevagabond.com/post/calisthenics-workout-plan/</guid><description>💪🏻 The main goal of this plan is to focus on skill development and strength building by having a holistic push, pull, legs and abs split. Training Days (each workout is about 1 hour): 4 Day version: Legs + Abs - Rest - Push - Rest - Pull - Rest - Fullbody 5-6 Day version: Push - Pull - Legs + Abs - Rest - Push - Pull - Legs + Abs Rest between reps: 1:30-2 Minutes Bonus: Full body workout plan Have a look at the Progressions library at the bottom of this page.</description><category domain="https://bytevagabond.com/categories/fitness">Fitness</category><category domain="https://bytevagabond.com/categories/calisthenics">Calisthenics</category><content:encoded><![CDATA[ 💪🏻 The main goal of this plan is to focus on skill development and strength building by having a holistic push, pull, legs and abs split. Training Days (each workout is about 1 hour): 4 Day version: Legs &#43; Abs - Rest - Push - Rest - Pull - Rest - Fullbody 5-6 Day version: Push - Pull - Legs &#43; Abs - Rest - Push - Pull - Legs &#43; Abs Rest between reps: 1:30-2 Minutes Bonus: Full body workout plan Have a look at the Progressions library at the bottom of this page. Push Exercise Name Sets Reps / Time Band Shoulder Warm-up Routine 3 30 Reps Wrist Warm-up 1 30sec Push-Ups Warm-Up 1 50 Reps ——— —— —— Planche Progression (curr. Adv. Tucked Planche) 3 6-10sec Planche Raise Progression (curr. Adv. Tuck Planche Raise) 3 6-8 Reps Wall Handstand Push Up Progression (curr. Wall Handstand Push Up) 3 6-8 Reps Planche Push Up Progression (curr. Adv. Tucked Planche Push Ups) 3 4-6 Reps L-Sit 2 30-50sec Dips 3 8-12 Reps ——— —— —— Stretch (Toe Touch, Hanging, etc.) 1 2-3min Pull Exercise Name Sets Reps / Time Band Shoulder Warm-up Routine 3 30 Reps Squats Warm-Up 1 30 Reps Wrist Warm-up 1 30sec ——— —— —— Front Lever Progressions (curr. One Leg Advanced Front Lever) 3 6-10sec Front Lever Row Progressions (curr. Advanced Tuck Front Lever Rows) 3 6-8 Reps Pull Ups 3 6-8 Reps Muscle Ups 2 4-6 Reps Row 3 8-12 Reps One Arm Pull Up Progression 2 4-6 Reps Chin Ups 3 8-12 Reps ——— Stretch (Toe Touch, Hanging, etc.) 1 2-3min Legs &#43; Abs Exercise Name Sets Reps / Time Band Shoulder Warm-up Routine 3 30 Reps Burpee Warm-Up 1 30 Reps Wrist Warm-up 1 30sec ——— —— —— Pistol Squat Progression (curr. Self-Assisted Pistol Squat) 4 6-8 Reps Reverse Nordic Curl Progression (curr. Band-Assisted Reverse Nordic) 3 8-12 Reps Human Flag 2x2 6-10sec Hollow Body Hold 2 60sec L-Sit 2 30-40sec Calf Raises 2 Failure ——— —— —— Stretch (Toe Touch, Hanging, etc.) 1 2-3min Bonus: Full Body Exercise Name Sets Reps / Time Band Shoulder Warm-up Routine 3 30 Reps Squats Warm-Up 1 30 Reps Wrist Warm-up 1 30sec ——— —— —— Planche Progression (curr. Adv. Tucked Planche) 2 6-10sec Front Lever Progressions (curr. One Leg Advanced Front Lever) 3 6-10sec Planche Push Up Progression (curr. Adv. Tucked Planche Push Ups) 3 4-6 Reps Pull Ups 3 6-8 Reps Dips 3 8-12 Reps Pistol Squat Progression (curr. Self-Assisted Pistol Squat) 3 6-8 Reps ——— —— —— Stretch (Toe Touch, Hanging, etc.) 1 2-3min Progressions Planche Progression
Tucked Planche Adv. Tucked Planche Super Adv. Tucked Planche One leg Planche Pike Straddle Planche Straddle Planche Almost Full Planche Full Planche Wall Handstand Progression
Wall Handstand Push Up Negatives Back-To-Wall Handstand Push Ups Partial Wall Handstand Push Up Wall Handstand Push Ups Deep Wall Handstand Push Ups Full ROM Wall Handstand Push Ups Pistol Squat Progression (with alternatives per progression)
Foot-Assisted Skater Squat / Skater Squat High-Seated Pistol Squat / Low-Seated Pistol Squat Self-Assisted Pistol Squat / Band-Assisted Pistol Squat Low-Elevated Pistol Squat / High-Elevated Pistol Squat Negative Pistol Squat / Heel-Elevated Pistol Squat / Pistol Squat SL Romanian Deadlift Progression
Bodyweight Romanian Deadlift Overhead SL Romanian Deadlift SL Romanian Deadlift Straight SL Romanian Deadlift Forward SL Romanian Deadlift Ring Hamstring Curl Progression
High-Elevated Hamstring Curl Medium-Elevated Hamstring Curl Low-Elevated Hamstring Curl Medium-Elevated Single Leg Hamstring Curl Low-Elevated Single Leg Hamstring Curl Reverse Nordic Curl Progression
Band-Assisted Negative Reverse Nordic Band-Assisted Reverse Nordic Negative Reverse Nordic Reverse Nordic Front Lever Progressions
Tuck Front Lever Half Tuck Front Lever Adv. Tuck Front Lever One Leg Front Lever One Leg Advanced Front Lever Almost Full Front Lever Straddle Front Lever Front Lever Front Lever Row Progressions
Tuck Front Lever Rows L Front Lever Rows Advanced Tuck Front Lever Rows One Leg Tuck Front Lever Rows Advanced One Leg Tuck Front Lever Rows Straddle Front Lever Rows Front Lever Rows Negative One Arm Pull Up Progressions
Assisted Top Hold One Arm Pull Up Top Hold One Arm Pull Up Assisted Negative One Arm Pull Up Negative One Arm Pull Up Paused Negative One Arm Pull Up One Arm Pull Up Progressions
5-1 Finger decrease
Pistol Squat Progression
Foot-Assisted Skater Squat Skater Squat High-Seated Pistol Squat Low-Seated Pistol Squat Self-Assisted Pistol Squat Band-Assisted Pistol Squat Low-Elevated Pistol Squat High-Elevated Pistol Squat Negative Pistol Squat Heel-Elevated Pistol Squat Pistol Squat Nordic Curl Progression
Band-Assisted Negative Reverse Nordic Band-Assisted Reverse Nordic Negative Reverse Nordic Reverse Nordic ]]></content:encoded></item><item><title>Bet on the Web - Why the future of applications is web based</title><link>https://bytevagabond.com/post/bet-on-the-web/</link><pubDate>Mon, 15 Jan 2024 10:50:00 UT</pubDate><guid>https://bytevagabond.com/post/bet-on-the-web/</guid><description>In my professional journey, I&amp;rsquo;ve observed a consistent trend: the shift towards web-based application development. This shift away from native OS-specific development is not just a fleeting change, but a significant movement in the tech world. In this article, I&amp;rsquo;ll share my insights on why this trend is emerging and its positive implications.
You Cannot Stop the Web Back in 2007, with the launch of the iPhone, Steve Jobs introduced the idea of creating web applications that look and feel like native apps (source).</description><category domain="https://bytevagabond.com/categories/web-development">Web Development</category><category domain="https://bytevagabond.com/categories/technology-trends">Technology Trends</category><category domain="https://bytevagabond.com/categories/digital-innovation">Digital Innovation</category><category domain="https://bytevagabond.com/categories/software-engineering">Software Engineering</category><category domain="https://bytevagabond.com/categories/industry-insights">Industry Insights</category><content:encoded><![CDATA[In my professional journey, I&amp;rsquo;ve observed a consistent trend: the shift towards web-based application development. This shift away from native OS-specific development is not just a fleeting change, but a significant movement in the tech world. In this article, I&amp;rsquo;ll share my insights on why this trend is emerging and its positive implications.
You Cannot Stop the Web Back in 2007, with the launch of the iPhone, Steve Jobs introduced the idea of creating web applications that look and feel like native apps (source). Fast forward to 2015, the term Progressive Web App (PWA) was coined by Chrome developer Alex Russell and designer Frances Berriman, advocating for building better experiences across devices and contexts within a single codebase (source).
Despite this vision, major players like Apple, Google, and Microsoft initially built closed ecosystems with app stores, taking significant cuts from app revenues. However, browser engines were also evolving, becoming robust enough to power interfaces in cutting-edge applications like SpaceX&amp;rsquo;s Dragon capsule (source).
JS to the moon and beyond 🚀 Even Tesla vehicles have started exploring web-based applications (source). This is further evidenced by operating systems like KaiOS and LG webOS, which utilize web technologies (source).
Recently, &amp;lsquo;mini apps&amp;rsquo; have surged in popularity, leading to a new W3C standard proposal (source). These are apps within apps, like those found in WeChat, offering a range of services from food delivery to scheduling appointments.
Major tech companies are now embracing this trend. Google has started allowing PWAs in the Google Play Store, and Microsoft supports PWAs with tools like PWABuilder. Apple, traditionally slower to adopt such trends, has significantly increased its investment in developing Safari and its WebKit engine to enhance PWA support. This commitment is evident in the recent addition of web push notifications in iOS 16.4, a feature long available in other engines (source). For those interested in the nuances of web app capabilities across different platforms, following Maximiliano Firtman on Twitter is highly recommended, as he&amp;rsquo;s an established expert in this domain.
From a developer&amp;rsquo;s perspective, the impact of Apple&amp;rsquo;s increased focus on WebKit is tangible. We&amp;rsquo;re witnessing a decline in Safari-specific bugs and edge cases, a welcome change for web developers. This shift in focus may be partly attributed to the EU&amp;rsquo;s antitrust lawsuits, which have pressured big tech players, including Apple, to open up their platforms. In response, Apple has positioned its browser as capable of installing web apps, suggesting that they already offer multiple app installation sources. However, developers familiar with iOS webviews recognize that all browsers in the Apple App Store, including Chrome and Firefox, are essentially Safari under different guises, as WebKit is the only browser engine permitted on iOS. This ongoing battle between the EU and Apple could lead to significant changes, hopefully benefiting consumers in the long run.
APIs, APIs, APIs The capabilities of web applications have grown exponentially, far surpassing the era of Web 2.0&amp;rsquo;s text, forms, and rich media. Today&amp;rsquo;s web can handle complete professional workflows within the browser. Examples like Photoshop (source), Figma, and Google Meet with AI capabilities for virtual backgrounds demonstrate the advanced functionalities now possible. Furthermore, game-streaming services like GeForce Now are redefining what we can achieve, entertainment-wise, with web technologies.
The addition of powerful APIs to browser engines has been a game-changer. Websites can now access a range of APIs from native devices, as showcased at whatpwacando.today. The use of binary executables in WebAssembly (examples include FFmpeg WASM, the Doom game at silentspacemarine.com, and even running an entire Windows 98 operating system in a browser at copy.sh/v86) alongside multithreading with SharedArrayBuffer (source) illustrates the breadth of what can now be accomplished in a browser context, rivaling native platform capabilities.
Windows 98 running in the browser 🤯 Emerging programming paradigms in web development are creatively addressing the performance issues traditionally associated with Single Page Applications (SPAs), which make up a large share of today&amp;rsquo;s web apps. QwikJS is at the forefront, enhancing loading efficiency by breaking down code into smaller segments and leveraging service workers for timely execution, thus reducing network load and easing the main thread&amp;rsquo;s burden. Concurrently, the integration of the View Transitions API and hypermedia in Multi Page Applications (MPAs) infuses them with SPA-like dynamism, thereby merging the best aspects of both SPA and MPA architectures. This convergence, reminiscent of the concept of &amp;ldquo;Carcinisation&amp;rdquo;, represents a significant stride in creating more efficient and user-friendly web applications, marking a new era in web development.
Furthermore, Web APIs, in contrast to operating system APIs, offer more standardization, leading to greater longevity and viability for web-based solutions. The open-source nature and inherent security of browser sandboxes may also provide a safer environment compared to native apps.
While there are still use cases where native development is preferable, such as in gaming, this gap is narrowing. The continuous enhancement of browser engines, leveraging technologies like WebGL and the File Storage API, is paving the way for features like instant loading of AAA games, further blurring the lines between native and web-based capabilities.
Flipside of the Coin Despite the exciting advancements in web technology, there are significant challenges that cannot be ignored. The near-monopoly of Chrome poses a risk to the diversity and openness of the web. Dominance by a single browser can lead to a lack of innovation and the potential imposition of proprietary standards, which may not align with the broader interests of the web community.
The decline in Firefox usage is also concerning. Firefox, known for its commitment to privacy and open standards, has traditionally been a counterbalance to the larger players. Its reduced market share could lead to a less diverse and competitive browser ecosystem, which may negatively impact the development of web standards.
WebKit, the engine behind Safari, presents its own set of challenges, particularly due to Apple&amp;rsquo;s closed ecosystem approach. This can lead to compatibility issues, especially for developers trying to create cross-platform web applications. Projects like the Ladybird browser illustrate the complexity and effort required to develop alternative browsers that can compete with established ones.
Yes, I Know It&amp;#39;s Bold: Web Monopolies Overshadowing Digital Liberty Another critical issue is the lack of emphasis on responsive designs by some businesses, which leads to suboptimal user experiences, especially on mobile devices. While web technologies are fully capable of supporting responsive and adaptive user interfaces, the responsibility falls on developers and businesses to prioritize and implement these practices.
Resume Reflecting on my career in IT and extensive experience with hybrid app technologies, the direction of the industry is becoming increasingly clear to me. The enhanced capabilities of web applications, changing attitudes of major technology players towards web technologies, and the gradual shift towards web-based solutions are indicative of a significant transformation in application development.
This trend suggests that web-based applications may soon become the standard, driven by their cross-platform compatibility, ease of access, and continuous improvements in performance and capabilities. The move towards the web as the primary platform for application development represents a fundamental shift in how we think about software, its development, and its deployment.
However, this shift also brings challenges that need to be addressed. Ensuring browser diversity, maintaining open standards, and prioritizing user experience across various devices are crucial for the sustainable growth of web technologies. The future of application development looks promising, but it requires a concerted effort from developers, businesses, and browser vendors to realize its full potential.
]]></content:encoded></item><item><title>Self-Sufficient Gardening &amp; Permaculture 101</title><link>https://bytevagabond.com/post/self-sufficient-gardening-permaculture-101/</link><pubDate>Wed, 19 Apr 2023 10:50:00 UT</pubDate><guid>https://bytevagabond.com/post/self-sufficient-gardening-permaculture-101/</guid><description>This article is based on my personal experience and in part on the teachings of the excellent Don Giardino. If you want to learn more about gardening and permaculture, I highly recommend his book and his website. Introduction: The Art of Reading Nature&amp;rsquo;s Signals Having a green thumb isn&amp;rsquo;t just about having some magical ability to grow plants. It&amp;rsquo;s about being an excellent observer of nature and learning to understand the signals that plants give us.</description><category domain="https://bytevagabond.com/categories/society">Society</category><category domain="https://bytevagabond.com/categories/permaculture">Permaculture</category><category domain="https://bytevagabond.com/categories/gardening">Gardening</category><content:encoded><![CDATA[ This article is based on my personal experience and in part on the teachings of the excellent Don Giardino. If you want to learn more about gardening and permaculture, I highly recommend his book and his website. Introduction: The Art of Reading Nature&amp;rsquo;s Signals Having a green thumb isn&amp;rsquo;t just about having some magical ability to grow plants. It&amp;rsquo;s about being an excellent observer of nature and learning to understand the signals that plants give us. From recognizing signs of water or nutrient deficiency to detecting pests and diseases, it&amp;rsquo;s essential to be vigilant and attentive to prevent the use of pesticides and insecticides and to ensure the healthy growth of your plants.
Starting Seedlings While starting seedlings is essential for some crops, it&amp;rsquo;s important to know the right time for each type. Remember, though, that buying seedlings from a local farmer&amp;rsquo;s market is also a good option if you find starting seeds to be challenging.
Examples of Seed Starting Timelines Physalis (Ground Cherry): Start seeds in mid-February to have ripe fruits before frost. Chilies, Eggplants, and Peppers: Starting seeds early (mid-February) provides better results. Tomatoes: Start seeds in mid-March and transplant to the greenhouse by May 1st. For outdoor tomatoes, give them their first nettle liquid feed in their pots and transplant by May 15th. Squash and Melons: Start seeds in mid-April and transplant to the garden by May 1st, providing a small mobile tunnel to protect them from potential frost. Beans and Corn: Wait until mid-May to start seeds or transplant, as they prefer warmer temperatures. Onions: Start seeds in January, but be prepared for a 50% survival rate without heating mats and lamps. Alternatively, use onion sets for an easier start, but be aware that they may not store as well as those grown from seed. To find the perfect timing for your climate and location, keep records of your seed-starting dates, germination times, and transplanting dates. After a few years, you&amp;rsquo;ll have a better understanding of the ideal schedule for your garden. At the end of this article you will find a full cultivation calendar.
Time Investment for Self-Sufficiency The time investment for self-sufficiency depends on the size of the area being cultivated. For example, a small 100 sqm vegetable garden requires about 1 hour of care per day on average, with some days requiring up to 3 hours during planting season. However, there will be days when no work is needed. Processing the harvest also takes time and effort, which increases as more produce is generated. In a year, around 365 hours can be spent maintaining a 100 sqm garden and processing the produce. Larger gardens require more time for care, harvest, and processing, with a 1000 sqm garden potentially becoming a full-time job that can also generate income.
Preserving Produce Preserving food naturally is an essential skill for self-sufficiency. Some techniques include canning, freezing, fermenting, drying, preserving in salt or saltwater solutions, preserving in oil, preserving in vinegar, preserving in alcohol, preserving with sugar, and vacuum sealing (rarely). These methods have been passed down through generations, allowing for experimentation and adaptation to new crops and situations.
Different containers can be used for preserving food, such as mason jars, weck jars with stainless steel clamps, and bottles with swing-top closures. Choosing the right container and maintaining proper hygiene during the preservation process is crucial to ensure the produce remains safe to consume.
Preservation Techniques Canning is a practical and sustainable method for preserving food, as it requires no additional energy input after the initial process. Other methods, like using an oven or dehydrator, may require some energy input but can still be made more eco-friendly by using solar power or air-drying when possible. Proper labeling, storage, and regular checks of preserved food help ensure food safety and quality.
In conclusion, self-sufficiency in food production requires a significant time investment, proper planning, and a wide range of preservation techniques. However, the rewards include greater control over one&amp;rsquo;s food supply, reduced reliance on external sources, and the satisfaction of providing for oneself and one&amp;rsquo;s family.
Agriculture and Its Cultivation Models Agriculture encompasses various cultivation methods, including monoculture, mixed-culture, and permaculture. Indoor gardening and aquaponics are not discussed here, as they are more akin to industrial food production facilities than traditional agriculture. Each of these agricultural methods has its advantages and disadvantages, with some being praised for promoting biodiversity and ecological sustainability, while others are criticized for their potential negative impacts on the environment. This article provides an overview of different cultivation methods and their key characteristics, with the aim of understanding which method can be considered ideal and sustainable.
Monoculture, Mixed-Culture, and Permaculture In a typical garden, one might find elements of all three cultivation methods. For example:
Monoculture: Potatoes may take up a significant area, planted alone without being mixed with other crops. Mixed-culture: Onions and carrots can be planted together, as they support each other in repelling pests. Permaculture: Perennial crops like artichokes, green asparagus, and Jerusalem artichokes can provide yields for years in the same location. Additional Noteworthy Techniques Conventional Regenerative Agriculture: This method aims to help the soil recover more effectively, using humus-enriching crops and green manure to maintain balanced nutrient levels and improve water retention. It often involves direct seeding in monocultures and may employ herbicides like glyphosate.
Ecological Regenerative Agriculture: The ecological variant of regenerative agriculture is primarily practiced on small-scale plots. It uses no-dig techniques that minimize soil disturbance and promote natural processes, resulting in a more sustainable approach.
Biodynamic Agriculture: This approach focuses on self-regulation, with the number of livestock being kept in balance with the land&amp;rsquo;s capacity to provide feed. The resulting manure helps maintain soil fertility, creating an ecologically sensible closed-loop system.
Biointensive Agriculture: A relatively new method, primarily used in the market farming scene in the US, that achieves high yields on small plots through well-structured crop rotation and closely spaced rows.
Agroforestry: This concept integrates trees into agricultural land, promoting plant diversity and insect populations while stabilizing the soil&amp;rsquo;s water balance.
Framework for Good Permaculture To achieve sustainable permaculture, certain principles should be followed:
Complete avoidance of chemical pesticides, insecticides, fungicides, and herbicides.
Use of biodynamic, closed-loop nutrient management with organic compost and herbal extracts.
Minimal soil disturbance and selective use of no-dig techniques.
Planting specific crops to support insects and pollinators.
Use of self-produced or certified organic, open-pollinated seeds.
Avoidance of F1 hybrids and genetically modified plants or seeds.
Watering with rainwater or pond water, with tap water reserved for extreme droughts.
Maintenance of natural habitats for insects and wildlife.
By adhering to these principles, sustainable permaculture can be achieved, promoting ecological balance and biodiversity while providing for our food needs.
Living Soil == Healthy Soil Defining a healthy soil can be approached in several ways, such as measuring humus content, the number of microorganisms, or the nutrients it contains. There are numerous scientific methods to determine if soil is healthy or not. A healthy soil is teeming with life, from the visible role of earthworms to trillions of invisible microorganisms. A soil rich in microorganisms can be considered healthy. Another important criterion is the soil&amp;rsquo;s ability to retain water. Thus, the number of organisms and water-retention capacity serve as good indicators of soil health.
A simple yet effective way to maintain healthy soil is proper care and treatment, following nature&amp;rsquo;s example. Establishing a compost cycle ensures a significant amount of biomass from the garden is returned to the soil. This process increases humus content, improves water retention, and supports nutrient availability for plants. Compost acts as an organic fertilizer, providing not only primary nutrients like potassium and nitrogen but also essential trace elements that are slowly released.
To cultivate healthy soil, avoid using pesticides, insecticides, fungicides, and herbicides, as they can harm fungi, microorganisms, insects, and plants. Also, refrain from using mineral fertilizers, which can cause nutrient imbalances and negatively affect the pH level. Focus on maintaining a pH level between 6-7 for optimal nutrient uptake by plants, adjusting with natural substances like eggshells for calcium if necessary.
Minimizing soil disturbance is crucial, as it maintains the natural balance of soil life and nutrients, which is essential for long-term soil health. Traditional plowing and tilling methods can harm soil life and release greenhouse gases. Instead, opt for mulching techniques to preserve soil structure and support natural processes.
Water quality also plays a significant role in soil health. In some areas, nitrate levels in surface and groundwater can be problematic due to conventional agriculture practices. To ensure healthy soil, prioritize water quality and explore alternative water sources if necessary.
In conclusion, maintaining healthy soil is a matter of proper care, following nature&amp;rsquo;s lead, and avoiding harmful substances. By preserving soil life and structure, you can cultivate a thriving garden that benefits both people and the environment.
Natural Fertilizers &amp;amp; Herbal Manure Recipes The restrictive conditions of permaculture also mean that fewer fertilizing options are available. However, these options are enough to deliver top results without resorting to synthetic or mineral fertilizers. My fertilizers have a huge advantage &amp;ndash; they are created in a closed loop using only uncontaminated biomass from my garden. This makes them completely free and accessible to everyone.
Here are the fertilizing options you can use in your vegetable garden. Only fertilizers marked with (*) do not belong to the closed-loop cycle and must be purchased.
Compost Compost is made from garden waste and green cuttings from my own garden. It also includes eggshells from my chickens, organic coffee grounds, and layers of herbs or bee pasture cuttings every 20 cm. Avoid adding cooked or greasy foods to the compost as they attract rodents. Seeds from mature fruits such as tomatoes, physalis, or pumpkins should not be added to the compost either, as they can still germinate even after two years.
Chicken Manure Chicken droppings are spread over almost the entire growing area during winter months. Chicken manure should never be used directly as a fertilizer, as it is very strong and can cause damage. Instead, let it rest for two weeks and mix it with 50% finished compost to fertilize crops. Chicken manure is an excellent fertilizer for tomatoes and peppers, especially during the flowering stage. If you don&amp;rsquo;t have chickens, organic guano is a good alternative.
Nettle Manure Prepare nettle manure as a nitrogen fertilizer from mid-March to mid-April. Mix 5 kg of chopped nettles with 50 to 75 liters of water in a large bucket and let it sit for 10 to 14 days. Stir daily and make sure it foams; when it stops foaming, the manure is ready to use. Strain the mixture and dilute 1 liter of manure with 10 liters of rain or pond water. Apply the diluted manure to plants like tomatoes, zucchini, peppers, pumpkins, melons, and cucumbers sparingly and only three times over a 30-day period.
Cabbage Manure Begin producing cabbage manure with the first summer cabbage in August. It has a higher potassium and phosphorus content than nettle manure and is used as a booster for high-demanding plants. The production process is identical to that of nettle manure. Dilute cabbage manure at a 1:10 ratio and use it sparingly on new cabbage plants such as Brussels sprouts, kale, or broccoli every 10 days for a 30-day period.
Remember to apply both nettle and cabbage manure only to the soil, as contact with the leaves may cause burn marks.
Tomato Shoot Manure (Tomatentriebjauche) Tomato shoot manure is made from excess tomato shoots, leathery lower leaves, and small side shoots. It is prepared in the same way as nettle manure and used specifically for tomatoes. Starting in late July, each tomato plant receives 0.5 liters of the 1:10 diluted manure every 10 days. This helps prevent nutrient deficiencies such as blossom end rot.
Bokashi Bokashi is a process of incorporating food waste directly into the soil. Unlike composting, bacteria and microorganisms break down the waste, resulting in a better soil structure and nutrient content. Bokashi is best suited for high-consuming plant hotspots, such as pumpkins, cucumbers, or zucchinis, but is not suitable for large areas.
Terra Preta Terra Preta, or &amp;ldquo;black earth,&amp;rdquo; is a South American soil amendment made from burnt wood and urine. It can be created by burning wood until a white ash layer forms and then extinguishing it with a high-nitrogen liquid such as human urine or manure. Terra Preta is an effective way to remineralize the soil every 10 years.
Field Horsetail Broth (Ackerschachtelhalmbrühe) Field horsetail broth is not a fertilizer but a strengthening agent. Its high silica content strengthens cell walls, reducing the chances of pest and fungal infestations. It is made from fresh horsetail, boiled and strained, and then sprayed on the plants once during the growth phase.
Beech Leaves Beech leaves can be collected and used as mulch for tomatoes and other crops. They decompose slowly, protecting the soil and reducing evaporation. The mulching process also reduces the need for watering and can improve the taste of fruits like tomatoes.
Pond Water Pond water, rich in microorganisms and nutrients, can be used to dilute plant manure and water seedlings. It can promote more robust growth in young plants compared to using rainwater alone.
Horn Shavings Horn shavings, which must be purchased in organic quality, are high in nitrogen and best used as autumn and spring fertilizers. Be cautious about using them later in the season, as they may interfere with potassium uptake, which plants need during the flowering phase.
Rock Dust Rock dust is powdered rock that can improve the water-holding capacity of the soil. It can also help prevent blossom end rot in tomatoes and peppers. Finely sifted rock dust can be used as a natural fungicide against powdery mildew.
Lava Mulch Lava mulch, inspired by the lava-rich soils of Sicily, is rich in minerals and trace elements. It can be used for cultivating Mediterranean herbs, which become more aromatic when grown in lava mulch. Lava mulch also helps with weed control and drainage in garden beds.
Wood Chips Wood chips are primarily used for garden paths but will eventually decompose and form humus, contributing to soil fertility.
In Conclusion Throughout this exploration, we have discovered the potential of permaculture and self-sufficient gardening. With innovative techniques, smaller-scale gardening can yield impressive results, promote biodiversity, and pave the way for sustainable food production. While conventional and organic monoculture methods may currently satisfy our needs, it&amp;rsquo;s crucial to consider alternative approaches for a more resilient and eco-friendly future. Embracing change, innovation, and learning from nature, we can cultivate a greener, more sustainable world.
Cultivation calendar No legumes Use - storage cultivation 1 broad beans Fresh consumption, frozen, dried March-July Nov-Jun (annual) 2 sugar snap Eating fresh, frozen April – Au (annual) 3 wrinkled peas Fresh consumption, frozen, dried March – Jul (annual) 4 Borlotti bean Fresh consumption, frozen, dried May – Sep (annual) 5 Runner bean Bernese rural women Fresh consumption, frozen, dried May – Sep (annual) 6 Dry bean Red squint Fresh consumption, frozen, dried May – Aug (annual) 7 Dry Bean Black Ball Fresh consumption, frozen, dried May – Aug (annual) 8 bush beans Eat fresh, frozen, steamed and pickled in oil and pickled in salt water and vinegar May – Aug (annual) 9 lenses dried May – Aug (annual) 10 Chickpeas Eating fresh, dried May – Aug (annual) 11 soybeans Eating fresh, dried May – Sep (annual) No morning glory Use - storage cultivation 12 White Sweet Potato Fresh consumption – is stored May – Nov (annual) 13 water spinach fresh consumption May – Oct (annual) No cruciferous Use - storage cultivation 14 Kale Eating fresh, in a smoothie, kale chips Aug – Feb (annual) 15 savoy Eating fresh, frozen Aug – Feb (annual) 16 White Cauliflower Eating fresh, frozen April – Oct (annual) 17 Pointed cabbage &amp;amp; white cabbage Fresh consumption, frozen, boiled coleslaw, sauerkraut planned, as fertilizer or liquid manure Aug – Dec (annual) 18 Cauliflower fresh consumption Aug – Feb (annual) 19 broccoli Eating fresh, frozen May – Nov Aug – Jan (annual) 20 Red cabbage Pickled red cabbage with pieces of apple May – Sep (annual) 21 arugula fresh consumption April – Nov (annual) 22 field mustard fresh consumption April – Nov (annual) 23 Kohlrabi fresh consumption April – Nov (annual) 24 Black radish Fresh consumption, cough syrup Jul-Feb (annual) 25 turnip fresh consumption March – Jun Aug – Nov (annual) 26 radish fresh consumption April – Nov (annual) No nightshade family Use - storage cultivation 27 Potatoes (several varieties usually 5) Fresh consumption – is stored April – Sep (annual) 28 Tomatoes (more than 30 varieties / 15 per year periodically) Eating fresh, canned with tomato sauce, canned with tomato-pepper ketchup, dried April – Nov (annual) 29 horn peppers Eating fresh, frozen May – Oct (annual) 30 block peppers Eating fresh, frozen May – Oct (annual) 31 Lombard peppers Eat fresh, frozen, dried and ground into paprika powder May – Oct (annual) 32 Chilies (Pueblo, Thai, Habanero) Eat fresh, dried and ground into chili powder. Pickled in olive oil. Preserved with sweet and sour chili May – Oct (annual) 33 tree chili Eat fresh, dried and ground into chili powder. Cooked down with chili sweet and sour sauce. May – Oct (annual) 34 bell chili Fresh consumption, boiled down with chili sweet and sour sauce. May – Oct (annual) 35 Goji berries Dried all year round 36 Physalis / Andean berry Eating fresh, dried May – Oct (annual) 37 Peppino / pear melon Fresh consumption is stored May – Oct (annual) 38 Aubergines (3 types) Eating fresh, preserved in olive oil, preserved in vinegar May – Oct (annual) No amaryllis plants Use - storage cultivation 39 Onions (5 varieties) Fresh consumption, frozen, dried, is stored. 
cough syrup, onion powder March – Sep (annual) 40 shallots Fresh consumption, is stored March – Sep (annual) 41 garlic (3 kinds) Eating fresh, stored, dried, garlic powder March – Sep Nov – Jun (annual) 42 Leek Eating fresh, frozen Jun – Feb (annual) 43 chives Fresh consumption, herb butter perennial 44 chives Fresh consumption, herb butter perennial No umbellifers Use - storage cultivation 45 carrots (4 types) Eating fresh, frozen, stored in heaps, for chicken feed April – Feb (annual) 46 celery root Eating fresh, frozen, is stored in heaps of earth April – Jan (annual) 47 celery Eat fresh, frozen, juiced as a smoothie May – Nov (annual) 48 Caraway seeds Dried seeds as a spice and for tea mixtures, fresh for herbal schnapps 2 year old 49 tea fennel Dried seeds as a spice and for tea mixtures, young leaves for traditional dishes and herbal schnapps perennial 50 anise Dried seeds as a spice and for tea mixtures, fresh umbels for herbal schnapps. April – Sep (annual) 51 lovage Fresh consumption as a spice perennial 52 Parsely Eating fresh, frozen April – Jan (2 year olds) 53 fennel fresh consumption Jul – Dec (annual) 54 dill Fresh consumption, frozen, dried May – Oct (annual) 55 plantain Tincture against insect bites prepared with double grain Wild growth Perennial 56 buckhorn Tincture prepared as an expectorant with double grain Wild growth Perennial 57 nettle Fresh consumption of the young shoots and leaves as well as the seeds in the salad, dried, as fertilizer or liquid manure Wild growth Perennial 58 Vogelmire Fresh consumption in salads and as chicken feed Wild growth Perennial No mints Use - storage cultivation 59 oregano Dried as a seasoning and for chicken feed, herb vinegar, herb oil perennial 60 marjoram Dried as a spice, for herbal vinegar, April – Nov (annual) 61 rosemary Fresh and dried as a spice, herbal vinegar, tincture with double grain, herbal schnapps perennial 62 basil Fresh, frozen, dried and made into pesto May – Sep (annual) 63 savory Fresh, dried, preserved in oil with bush beans, herbal tea perennial 64 sage Fresh, dried, for tea blends, herbal schnapps perennial 65 catnip Dried for tea blends, herbal schnapps perennial 66 lemon balm Dried for tea blends, herbal schnapps perennial 67 Real thyme Fresh and dried as a spice, for perennial 68 Wild Thyme Fresh and dried as a spice, for tea mixtures, herbal schnapps perennial 69 lavender Dried for tea blends, scented sachets, herbal schnapps, perennial No rose family Use - storage cultivation 70 strawberry Eating fresh, boiled down to jam, for smoothies, for strawberry ice cream, fruit leather, rum pot perennial 71 blackberry Eating fresh, boiled down to jam, for smoothies, for blackberry ice cream, fruit leather, rum pot perennial 72 raspberry Fresh consumption, boiled down to jam, for smoothies, for raspberry ice cream, fruit leather, rum pot perennial 73 aronia berry Dried, Rumtopf perennial 74 roses Dried for tea blends and bath additives perennial No cleavers Use - storage cultivation 75 woodruff Freshly made into ice cream perennial No mints Use - storage cultivation 76 Sicilian mint Dried for tea, fresh in herbal schnapps perennial 77 Moroccan mint Fresh for tea and herbal schnapps perennial No sweet grasses Use - storage cultivation 78 dry corn Dried for cornmeal and crushed for chicken feed May – Oct (annual) 79 Blue popcorn corn Dried for cornmeal, popcorn and as chicken feed May – Oct (annual) 80 Bantam Sweetcorn Fresh and as chicken feed May – Sep (annual) No ginger family Use - storage cultivation 81 Ginger 
Fresh as a spice and tea blend April – Nov (annual) 82 turmeric Fresh and dried as a spice April – Nov (annual) No cucurbits Use - storage cultivation 83 Green zucchini Fresh canned in sweet and sour (ZuCuma) April – Oct (annual) 84 Zucchini 40 Giorni Fresh, frozen, perfect for Mediterranean minestrone as well as fried in thin slices, canned in sweet and sour April – Oct (annual) 85 Zucchini 7 Anni Fresh perennial 86 Hokkaido pumpkin Fresh, frozen, is stored, kernels are dried with salt April – Oct (annual) 87 Butternut squash Fresh, frozen, is stored April – Oct (annual) 88 Sicilian watermelon Fresh and processed into melon granita or juice May – Sep (annual) 89 Siberian watermelon Fresh and processed into melon granita or juice May – Sep (annual) 90 gherkin Fresh and pickled with Silesian cucumber bites, whole cucumbers in vinegar and salted cucumbers May – Aug (annual) 91 outdoor cucumber Fresh in salads or raw May – Sep (annual) 92 cucumber Fresh in salads or raw May – Sep (annual) 93 Inca cucumber Fresh as a stuffed pickle May – Sep (annual) 94 Mexican mini cucumber Fresh as a snack in the garden May – Sep (annual) 95 Jagulan (Herb of Immortality) Dried for tea blends perennial No beet plants Use - storage cultivation 96 chard Fresh, frozen, for pasta filling, chard patties, fermented stalks in salt water. chicken feed April – April (annual) 97 Beetroot Fresh, frozen raw in carpaccio or cooked. April – Aug Aug – Dec (annual) No daisy family Use - storage cultivation 98 mugwort Dried as a smoked product and as a spice perennial 99 sunflower Kernels are dried as a snack and for bird and chicken feed April – Sep (annual) 100 marigold Fresh, dried, in salads, for tea mixtures, herbal schnapps, tincture with double grain, preserved in oil, for wound ointments based on coconut fat or olive oil, insect food perennial 101 Jerusalem Artichoke Fresh as raw food or in salads, chicken feed perennial 102 Artichoke imperial Whole flowers filled, artichoke hearts fresh, made into meatballs, bitters. Leaves dried for tea blend (extremely bitter) perennial 103 Sicilian spiny artichoke Artichoke hearts fresh and frozen, the leaves are made into bitters perennial 104 lettuce Fresh in the salad March – Dec (annual) No daisy family Use - storage cultivation 105 chamomile Dried for tea blend, fresh for double grain tinctures, insect food Wild growth Perennial 106 Lollo rosso Fresh in salads and as chicken feed April – Nov (annual) 107 Lollo Bionda Fresh in salads and as chicken feed April – Nov (annual) 108 Various cut salads Fresh in salads and as chicken feed April – Nov (annual) 109 tagetes To repel wireworms, nematodes and whitefly. Dried as a spice April – Nov (annual) No foxtail plants Use - storage cultivation 110 spinach Fresh, frozen March – Jul Aug – Nov (annual) 111 amaranth Young leaves before flowering like spinach, dried seeds as a grain substitute, boiled with double the amount of water and mashed for chicken feed. 
April – Nov (annual) No honeysuckle family Use - storage cultivation 112 Lamb&amp;rsquo;s lettuce Fresh for salads March – Jul Aug – Nov (annual) No asparagus plants Use - storage cultivation 113 Green asparagus Fresh perennial 114 Blue Asparagus Fresh perennial No knotweed plants Use - storage cultivation 115 rhubarb Freshly made into jams perennial No Edible Flowers Use - storage cultivation 116 Nasturtium Fresh, dried, frozen as a spice, herbal tea, in salads, insect food, tincture with double grain, young seeds pickled in vinegar or salt as false capers May – Nov (annual) 117 Cornflower (various varieties) Fresh in salads, dried for tea mixtures, insect food April – Nov (annual) 118 borage Fresh in salads or as a garden snack, insect food April – Nov (annual) No Edible Flowers Use - storage cultivation 119 cosmea Fresh in salads or as a garden snack, insect food April – Nov (annual) 120 Seed mix 40 varieties Fresh in salads or as a garden snack, insect food April – Nov (annual) No trees &amp;amp; shrubs Use - storage cultivation 121 kiwi Fresh or cooked into jam perennial 122 passion fruit No yield yet perennial 123 plate peach Fresh perennial 124 Red Heaven peach Fresh and preserved in jars perennial 125 White Mulberry Fresh perennial 126 sweet chestnut Freshly roasted, boiled in the oven or in water, dried into chestnut flour, canned as a spread perennial 127 &amp;ldquo;&amp;ldquo;&amp;ldquo;Apple Trees Gala&amp;rdquo;&amp;rdquo;&amp;rdquo;&amp;quot;&amp;quot;&amp;quot;&amp;quot; Fresh, dried as dried apples, canned applesauce perennial 128 sour cherry Fresh for jam perennial 129 pear trees Fresh perennial 130 fig trees Fresh and dried perennial 131 Mirabelle Fresh and dried, Rumtopf perennial 132 plum tree Fresh and dried, Rumtopf perennial 133 giant cherry Fresh, canned as whole churches, rum topf, jam perennial 134 pastures Early flowering bee food, fences, support sticks, bed edging perennial 135 wild cherries Bee food, shade provider perennial 136 elder tree Dried elderflowers and fresh elderflower juice &amp;amp; liqueur. Berries boiled down to jam with blackberries and elderberry juice, insect food perennial 137 hazelnut trees Nuts fresh and dried, sticks as tendril aids and support sticks. perennial 138 olive tree No significant earnings yet. perennial 139 walnut tree Dried nuts, tannins in the leaves as a decoction for weed suppression on wood chip paths. perennial 140 Sichuan pepper Dried seed coat as a spice perennial 141 Various vines Fresh, cooked with plums for jam, rum pot, traditional Sicilian mostarda (similar to fruit leather) perennial 142 spice laurel Dried as a spice perennial 143 blueberry bushes Fresh as a garden snack, Rumtopf perennial 144 gooseberry Fresh as a garden snack, Rumtopf perennial 145 currant bushes Fresh as a garden snack, rum pot, boiled down to currant jelly perennial 146 quince tree Quince confection, cooking water becomes quince vodka perennial ]]></content:encoded></item><item><title>Why I Deleted WhatsApp and What I've Gained From Living Without It for 5 Years</title><link>https://bytevagabond.com/post/living-without-whatsapp/</link><pubDate>Thu, 01 Dec 2022 21:55:54 UT</pubDate><guid>https://bytevagabond.com/post/living-without-whatsapp/</guid><description>In today&amp;rsquo;s world, it&amp;rsquo;s almost impossible to imagine life without WhatsApp. The popular messaging app has become an integral part of our daily lives, allowing us to stay connected with friends, family, and colleagues no matter where we are. 
But what if you were to live without WhatsApp for 5 years?
I know this sounds like a daunting task, but hear me out. I recently made the decision to delete WhatsApp from my phone and, after nearly five years of living without it, I can confidently say that it&amp;rsquo;s been a liberating experience.</description><category domain="https://bytevagabond.com/categories/privacy">Privacy</category><content:encoded><![CDATA[In today&amp;rsquo;s world, it&amp;rsquo;s almost impossible to imagine life without WhatsApp. The popular messaging app has become an integral part of our daily lives, allowing us to stay connected with friends, family, and colleagues no matter where we are. But what if you were to live without WhatsApp for 5 years?
I know this sounds like a daunting task, but hear me out. I made the decision to delete WhatsApp from my phone nearly five years ago, and after living without it for that long I can confidently say that it&amp;rsquo;s been a liberating experience.
One of the key concerns about WhatsApp is the metadata generated by the app. Metadata is essentially data about data: the sender and recipient of a message, the time and date it was sent, and the location it was sent from. WhatsApp can access this metadata and use it for purposes such as targeted advertising and user profiling, which makes it possible to track users&amp;rsquo; activities and compromise their privacy.
Metadata does matter Another benefit of living without WhatsApp is the increased focus and productivity it allows. Without the constant barrage of notifications and messages, I&amp;rsquo;m able to better focus on the tasks at hand and get more done. This is especially true when it comes to work – without the distractions of WhatsApp, I&amp;rsquo;m able to better prioritize my tasks and get more work done in a shorter amount of time.
In addition to these benefits, I&amp;rsquo;ve also been able to ditch annoying group chats and unnecessary stories. Instead, I&amp;rsquo;ve been using Signal as my primary messaging app. Not only is Signal known for its strong privacy and security features, but it also has a simple and user-friendly interface that makes it easy to communicate with friends and family. And, surprisingly, even my grandma has been able to figure out how to use it!
Of course, living without WhatsApp does have its drawbacks. One of the biggest challenges is staying connected with friends and family who only use the app to communicate. In these cases, I&amp;rsquo;ve had to rely on alternative messaging apps and methods of communication, such as email or phone calls. While these methods may not be as convenient as WhatsApp, they do allow me to stay in touch with the people I care about.
In conclusion, living without WhatsApp for 5 years has been a liberating experience that has afforded me increased privacy, focus, and productivity. I&amp;rsquo;ve been able to ditch annoying group chats and unnecessary stories, and I&amp;rsquo;ve found a great alternative in Signal. While it may not be for everyone, I would encourage anyone who is considering deleting the app to give it a try. You may be pleasantly surprised by the benefits it can bring.
]]></content:encoded></item><item><title>Reverse engineering and creation of a temperature monitoring app</title><link>https://bytevagabond.com/post/temperature-monitoring-app/</link><pubDate>Tue, 27 Sep 2022 10:50:00 UT</pubDate><guid>https://bytevagabond.com/post/temperature-monitoring-app/</guid><description>A client of mine has a product in the form of a sensor that monitors the temperature of medical critical infrastructures, such as medicine refrigerators or organ transports. For the configuration, analysis and alerting of the sensor, he had licensed an app from another provider. The app was actually for another product, but the other software company changed the logo and then decided to call it a day. One day, however, the software company became impudent and demanded €50,000 licence fee per year for a product that was not even tailored to my client&amp;rsquo;s needs.</description><category domain="https://bytevagabond.com/categories/project">Project</category><category domain="https://bytevagabond.com/categories/job">Job</category><category domain="https://bytevagabond.com/categories/freelance">Freelance</category><category domain="https://bytevagabond.com/categories/cross-platform">Cross Platform</category><category domain="https://bytevagabond.com/categories/app">App</category><content:encoded><![CDATA[A client of mine has a product in the form of a sensor that monitors the temperature of medical critical infrastructures, such as medicine refrigerators or organ transports. For the configuration, analysis and alerting of the sensor, he had licensed an app from another provider. The app was actually for another product, but the other software company changed the logo and then decided to call it a day. One day, however, the software company became impudent and demanded €50,000 licence fee per year for a product that was not even tailored to my client&amp;rsquo;s needs. So my client turned to me and asked me if I could develop an app that met his needs as soon as possible.
Join me on a six-week journey that included reverse engineering Chinese BLE specifications, massive performance improvements and responding to drastic changes in customer requirements.
规范 (Specification) - What is 0xA9-0B-9A-05-03-A8-BB-08 ?! In order for the app to communicate with the sensors, I needed to know which hex values I could send and receive. Fortunately, my client was in contact with the manufacturers of the sensors in China, so he asked them for the BLE specification. It took a while, but I got something&amp;hellip; in Chinese and far from complete. Using Bluetooth monitoring apps like nRF Connect, I could see that the devices exposed much more data than was covered in my specification.
One of the main functions of the app is to notify the user remotely when the sensors are out of temperature range; in a way, the app serves as a base station. To do this, I needed the current temperature, which was not in the specification. Fortunately, after observing the data for a while, I noticed that in frames like &amp;ldquo;0xA9-0B-9A-05-03-A8-BB-08&amp;rdquo; only the last two bytes changed. So I converted &amp;ldquo;BB-08&amp;rdquo; and found that this was my temperature. Yes! That&amp;rsquo;s it!
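In code, that conversion might look something like the TypeScript sketch below. The byte layout (temperature in the last two bytes, little-endian, in hundredths of a degree Celsius) is my reading of the example frame, not something the vendor spec confirms, and the function name is made up for illustration.

```ts
// Illustrative only: the byte layout below is an assumption, not the vendor spec.
// Parses a frame like A9-0B-9A-05-03-A8-BB-08 and reads the last two bytes as a
// little-endian integer in hundredths of a degree Celsius.
function parseTemperature(frame: Uint8Array): number {
  if (frame.length < 2) {
    throw new Error('Frame too short to contain a temperature');
  }
  const lo = frame[frame.length - 2]; // 0xBB
  const hi = frame[frame.length - 1]; // 0x08
  const raw = (hi << 8) | lo;         // 0x08BB = 2235
  return raw / 100;                   // -> 22.35 °C
}

// The frame observed in nRF Connect:
const frame = Uint8Array.from([0xa9, 0x0b, 0x9a, 0x05, 0x03, 0xa8, 0xbb, 0x08]);
console.log(parseTemperature(frame)); // 22.35
```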
The specs... Building the app I started with Capacitor, a cross-platform framework that allows you to build apps for Android, iOS and the web. I had already used Capacitor for a couple of previous projects and was very satisfied with it. The app was to be developed for Android and iOS; a web version was not planned.
As always, I developed the app in a very modular way, with a clear separation of concerns, which makes it easier to maintain and extend. This saved me a lot of headaches later in development, when my client decided that the sensors should be monitored by several devices at the same time. I had to change the BLE communication module, but thanks to the modular design I could do that without touching the rest of the app.
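To illustrate the kind of module boundary I mean, here is a rough TypeScript sketch of such an interface. It is not the actual project code; the names and methods are invented for illustration, but the idea is that the rest of the app only ever talks to this interface, so moving from single-device to multi-device monitoring stays local to the BLE module.

```ts
// Hypothetical module boundary for the BLE layer (names are illustrative).
export interface SensorReading {
  deviceId: string;
  temperature: number; // °C
  timestamp: Date;
}

export interface BleSensorGateway {
  connect(deviceId: string): Promise<void>;
  disconnect(deviceId: string): Promise<void>;
  /** Subscribe to readings from every connected sensor; returns an unsubscribe function. */
  onReading(handler: (reading: SensorReading) => void): () => void;
}
```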
From a programming perspective, the most interesting part was the rather complex alarm-monitoring algorithm. Alarms can be triggered at different intervals and at specific times. When an alarm fires, it calls a serverless function, which in turn notifies the user in the configured way (email, etc.). This process had to be very stable, as the main use case of the product is in the medical field.
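A heavily simplified sketch of that kind of check could look like the following. The rule shape, field names and the notification endpoint are placeholders, not the real app's API; only the overall idea (interval-based evaluation plus a serverless notification call) is taken from the description above.

```ts
// Simplified illustration of an interval-based alarm check (placeholder names).
interface AlarmRule {
  minTemp: number;           // lower bound in °C
  maxTemp: number;           // upper bound in °C
  checkEveryMinutes: number; // how often this rule is evaluated
}

function isOutOfRange(rule: AlarmRule, temperatureC: number): boolean {
  return temperatureC < rule.minTemp || temperatureC > rule.maxTemp;
}

function isDue(rule: AlarmRule, lastCheck: Date, now: Date): boolean {
  return (now.getTime() - lastCheck.getTime()) / 60_000 >= rule.checkEveryMinutes;
}

// When a rule is due and the reading is out of range, call a serverless
// function that notifies the user (email, push, ...). The URL is a placeholder.
async function evaluate(rule: AlarmRule, temperatureC: number, lastCheck: Date) {
  const now = new Date();
  if (isDue(rule, lastCheck, now) && isOutOfRange(rule, temperatureC)) {
    await fetch('https://example.com/notify', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ temperatureC, at: now.toISOString() }),
    });
  }
}
```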
While developing the BLE core layer, I also noticed a feature the existing app&amp;rsquo;s stack never used: you could negotiate a larger MTU size and speed up the temperature synchronisation process by 1000%. This was a very nice side effect.
Screenshot of the app Verdict The app was developed in six weeks and is now in use by my client.
]]></content:encoded></item><item><title>Building a social media startup</title><link>https://bytevagabond.com/post/building-a-social-media-startup/</link><pubDate>Tue, 26 Jul 2022 10:50:00 UT</pubDate><guid>https://bytevagabond.com/post/building-a-social-media-startup/</guid><description>Go straight to what I learned so you don't have to go through the same process. I want to share with you the most comprehensive entrepreneurial journey I have ever undertaken. bonq is an app I developed with UI designer Tien Nguyen from August 2021 to June 2022.
bonq - Share unique emotions with your friends Every day people react with real emotion to memory photos, memes or funny videos.</description><category domain="https://bytevagabond.com/categories/project">Project</category><category domain="https://bytevagabond.com/categories/building">Building</category><category domain="https://bytevagabond.com/categories/social-media">Social Media</category><category domain="https://bytevagabond.com/categories/entrepreneur">Entrepreneur</category><content:encoded><![CDATA[ Go straight to what I learned so you don&#39;t have to go through the same process. I want to share with you the most comprehensive entrepreneurial journey I have ever undertaken. bonq is an app I developed with UI designer Tien Nguyen from August 2021 to June 2022.
bonq - Share unique emotions with your friends Every day people react with real emotion to memory photos, memes or funny videos. Until recently, this bonq moment only took place in person, as the reaction via Messenger is often text-based and emotionally reduced. With the bonq app, you can share your favourite memes and memories and capture immediate, authentic and emotional reactions from your friends in the form of a video:
🤳🏻 Send a bonq
When you find a photo or video that reminds you of your friends, send it to them via bonq. TikToks etc. can also be imported directly into bonq via the native share menu.
✨ Spark curiosity
Your friends receive the content blurred at first. When they click on it, the blur is removed and their first, unique, authentic emotion is recorded.
🤩 Capture emotions
The reaction is sent back to you, adding an emotional dimension to the content and available in your bonq collection so you can relive those moments anytime.
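As a rough illustration of how such a reveal-and-record step could work with standard web APIs: this is not the actual bonq implementation, and the element handling, clip length and MIME handling are placeholders, but the building blocks (getUserMedia plus MediaRecorder) are the ones a Capacitor web codebase has available.

```ts
// Illustrative sketch (not bonq's real code): start recording the front camera
// the moment the blur is removed, so the first reaction is what gets captured.
async function captureReaction(blurredImg: HTMLImageElement): Promise<Blob> {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: 'user' },
  });
  const recorder = new MediaRecorder(stream);
  const chunks: BlobPart[] = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);

  const done = new Promise<Blob>((resolve) => {
    recorder.onstop = () => resolve(new Blob(chunks));
  });

  blurredImg.style.filter = 'none'; // reveal the content
  recorder.start();
  setTimeout(() => {                // record a short reaction clip
    recorder.stop();
    stream.getTracks().forEach((t) => t.stop());
  }, 5000);

  return done;
}
```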
bonq app screenshots (from left to right): 1. collection screen showing all bonqs and bonq groups 2. recording screen when viewing a new bonq 3. overview of reactions to a bonq; clicking a reaction thumbnail opens the reaction video Developing bonq Design Thinking To understand the potential users empirically, I worked with different design thinking methods. First, I identified fictitious &amp;ldquo;personas&amp;rdquo;, i.e. a prototype for a group of users. Then, based on these personas, I created various &amp;ldquo;customer journey maps&amp;rdquo; to visualise the steps a user goes through when using the product. These methods helped me formulate concrete user stories, from which I derived the functions of the app and identified the target group of bonq: people of Generation Alpha and Z (13-25 years old) who use many social media platforms in their daily lives and send each other digital content via messenger with varying frequency.
User Tests After creating an MVP, and in order to validate the findings and hypotheses, I planned and carried out several think-aloud user tests. A user test always required two people, ideally two who already knew each other. The participants talk out loud about what they see and think while sending some bonqs back and forth to each other. Afterwards, they answered a survey to provide more detailed insights on relevant questions. The results of these user tests were factored into the design of the final product.
Implementation To get to market quickly and iterate on all platforms, I used a cross-platform stack: I write a single code base that makes bonq available in modern browsers and in the Apple and Google app stores. This allows me to get maximum reach from day one.
A rough sketch of the app tech stack: Angular as the front-end framework; Ionic Capacitor for native UI components and APIs; WebAssembly and TensorFlow for face recognition; Supabase, AWS, NestJS and Firebase as the back end.
bonq app architecture 2022 Business plan bonq is both a content and a messaging platform that also serves the hype topic of reactions. bonq operates in a market with a total potential of 500 billion US dollars. The business model is primarily based on ads; other sources of revenue are freemium models and API fees. With a user base the size of Jodel, another German social network, 7.5 million users could generate 4 million euros in revenue per year.
bonq between content and messaging platform Learnings Two brains are smarter than one. And if you want to go fast, go alone, but if you want to go far, go together. I&amp;rsquo;ve been programming since my teens, so I&amp;rsquo;m quite capable of developing an app. In my undergraduate degree, I learned how to do the design thinking process for a product and how to build a business around it. However, without the help of Tien&amp;rsquo;s ideation and design skills, bonq wouldn&amp;rsquo;t be what it is today. Find a capable co-founder!
Developing code should take the same amount of time as developing a good marketing strategy. The next time I build a project, I&amp;rsquo;ll split my time 50/50 between marketing and coding. Now it was more like 90/10.
I&amp;rsquo;m not the same programmer I was before this project. I&amp;rsquo;ve learned how to build scalable, low-cost systems, manage a lot of data, keep it in sync and available offline. All while I&amp;rsquo;ve touched pretty much every web api out there and covered all the edge cases for the various platforms (I&amp;rsquo;m looking at you, Safari iOS). I could write an entire book on the architecture and algorithms that went into the development of bonq.
At the first pitch, we were very confident, but we were overwhelmed by questions that we hadn&amp;rsquo;t thought through. For example, &amp;ldquo;How is this different from Snapchat?&amp;rdquo; The first pitch was a real motivation killer. But it also helped us prepare answers to those questions and shape bonq&amp;rsquo;s identity. We revised our pitch, answered all the questions, and presented it to an accelerator. And they wanted us to be part of it! I think we had to really mess up first to finally learn how to do the bonq pitch. Our biggest pitch was at the Startup Summer Slam Festival 2022 in front of a few hundred people.
Tien Nguyen and Max Heichling on Stage at the SSSF 2022 You have to have a niche. Social media is not a very good niche to reach a target audience. My social media bots can only do so much to generate traffic. If there are no hashtags or influencers in a niche to target, even black hat marketing won&amp;rsquo;t do much. And I think that&amp;rsquo;s the main reason why it&amp;rsquo;s so difficult for bonq to take off. Social media is not a niche.
There&amp;rsquo;s a lot of support out there. Whether it&amp;rsquo;s workshops, funding, cloud credits, mentorships, platforms to pitch. Just find your next accelerator and scholarships.
And finally. As great as the challenge may be at the beginning.
One step at a time. If you keep going, everything will fall into place, and you will learn a lot, at least.
You can do the thing!
You can do the thing! ]]></content:encoded></item><item><title>Goodbye, Ticket Monopolists: How a Decentralized Ticketing Platform can change the Game</title><link>https://bytevagabond.com/post/decentralized-ticketing-platform/</link><pubDate>Wed, 13 Apr 2022 10:50:00 UT</pubDate><guid>https://bytevagabond.com/post/decentralized-ticketing-platform/</guid><description>Ever been annoyed by high fees and restrictive conditions from ticket monopolists like Ticketmaster? Worry no more! This blog post outlines a project that proposes a new, decentralised approach to ticketing, designed to create a fairer experience and offer an innovative perspective on the ticketing industry.
The Magic Behind the Curtain: IOTA&amp;rsquo;s Shimmer Network The project is built upon IOTA&amp;rsquo;s Shimmer Network, a distributed ledger technology that allows the creation and transfer of non-fungible tokens (NFTs).</description><category domain="https://bytevagabond.com/categories/project">Project</category><category domain="https://bytevagabond.com/categories/building">Building</category><category domain="https://bytevagabond.com/categories/decentralisation">Decentralisation</category><category domain="https://bytevagabond.com/categories/entrepreneur">Entrepreneur</category><category domain="https://bytevagabond.com/categories/iota">Iota</category><content:encoded><![CDATA[Ever been annoyed by high fees and restrictive conditions from ticket monopolists like Ticketmaster? Worry no more! This blog post outlines a project that proposes a new, decentralised approach to ticketing, designed to create a fairer experience and offer an innovative perspective on the ticketing industry.
The Magic Behind the Curtain: IOTA&amp;rsquo;s Shimmer Network The project is built upon IOTA&amp;rsquo;s Shimmer Network, a distributed ledger technology that allows the creation and transfer of non-fungible tokens (NFTs). In this case, NFTs are used to represent unique, non-exchangeable digital tickets. The best part? Shimmer offers free transactions and built-in royalties for NFTs, which means artists and ticket creators are automatically involved in the sales of their digital objects.
The Dream Team: IPFS-based OrbitDB, NestJS, and Angular The project utilizes an IPFS-based OrbitDB for data storage, which provides a decentralized, distributed database solution that ensures scalability and robustness. NestJS, a modern and scalable backend framework, and Angular, a widely used frontend framework, help create a user-friendly and reactive web application that connects with the Shimmer Network and the OrbitDB.
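To give a rough idea of what storing ticket records in OrbitDB could look like, here is a small sketch against the OrbitDB v0.x-style document-store API. The database name, the document fields and the id are made up for this example; the real schema of the project may differ.

```ts
import { create } from 'ipfs-core';
import OrbitDB from 'orbit-db';

// Illustrative sketch: an OrbitDB document store for tickets.
// Database name and document shape are assumptions for this example.
async function openTicketStore() {
  const ipfs = await create();                        // in-process IPFS node
  const orbitdb = await OrbitDB.createInstance(ipfs); // OrbitDB on top of it
  const tickets = await orbitdb.docstore('tickets', { indexBy: 'ticketId' });
  await tickets.load();                               // replay local log entries

  await tickets.put({ ticketId: 'nft-0x123', eventId: 'concert-1', redeemed: false });
  const [ticket] = tickets.get('nft-0x123');          // query by the index field
  return { tickets, ticket };
}
```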
Architecture of the decentralized ticketing platform using IPFS-based OrbitDB, NestJS, and Angular How it Works: Ticket Creation, Purchase, and Redemption The process of creating, purchasing, and redeeming tickets is made easy and seamless through the collaboration of various components. The Angular client allows users to interact with the platform and perform actions like creating, buying, and redeeming tickets. A NestJS API manages a central wallet for transactions and interactions with the Shimmer Network and the OrbitDB.
To create or purchase a ticket, the client contacts the API gateway, which generates a new wallet address and sends it to the client. The client then sends the required funds through the distributed ledger. Once the funds arrive at the API wallet, the generated or transferred tickets are sent to the client&amp;rsquo;s wallet, completing the ticket creation and purchase process.
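Sketched in TypeScript, that flow could look like the following. The GatewayWallet interface is hypothetical; a real implementation would sit on top of the Shimmer wallet SDK, and only the flow itself (issue a deposit address, wait for the funds, then transfer the NFT ticket) is taken from the description above.

```ts
// Hypothetical wallet abstraction for the API gateway (names are illustrative).
interface GatewayWallet {
  generateAddress(): Promise<string>;
  waitForDeposit(address: string, amount: bigint): Promise<void>;
  sendNftTicket(nftId: string, recipient: string): Promise<string>; // returns block/tx id
}

async function purchaseTicket(
  wallet: GatewayWallet,
  nftId: string,
  buyerAddress: string,
  price: bigint,
): Promise<string> {
  const depositAddress = await wallet.generateAddress(); // returned to the client
  await wallet.waitForDeposit(depositAddress, price);    // funds arrive on the ledger
  return wallet.sendNftTicket(nftId, buyerAddress);      // ticket moves to the buyer
}
```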
Process of creating and buying NFT tickets The ticket redemption process is equally straightforward. The client sends a ticket redemption request to the API Gateway, which checks the P2P database for a redemption entry. If the ticket has not been redeemed, the API Gateway generates a redemption token and sends it to the client. The client then signs the token with their private key and sends it back to the API Gateway, which verifies the information and marks the ticket as redeemed.
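The signature check at the heart of the redemption flow could be sketched as below, using ed25519 signatures via tweetnacl as a stand-in. The article does not specify the exact token format or signature scheme, so treat this purely as an illustration of the challenge-response idea.

```ts
import nacl from 'tweetnacl';

// 1. Gateway issues a random redemption token for an unredeemed ticket.
function createRedemptionToken(): Uint8Array {
  return nacl.randomBytes(32);
}

// 2. Client signs the token with the private key of the wallet holding the ticket.
function signToken(token: Uint8Array, secretKey: Uint8Array): Uint8Array {
  return nacl.sign.detached(token, secretKey);
}

// 3. Gateway verifies the signature against the owner's public key and, if valid,
//    writes a redemption entry to the P2P database.
function verifyToken(token: Uint8Array, signature: Uint8Array, publicKey: Uint8Array): boolean {
  return nacl.sign.detached.verify(token, signature, publicKey);
}

// Usage sketch:
const keys = nacl.sign.keyPair();
const token = createRedemptionToken();
const signature = signToken(token, keys.secretKey);
console.log(verifyToken(token, signature, keys.publicKey)); // true
```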
Ticket redemption process Demo Time! A Visual Guide to the Ticket Redemption Process The ticket redemption process is designed to be user-friendly and secure. Below, we break down the steps with accompanying images to help you understand how the platform ensures a seamless experience.
Starting the Ticket Redemption Process: As shown below, users initiate the ticket redemption process by signing the redemption token created by the server, using their TanglePay Wallet.
Sign the redemption token with the private wallet Redeemable QR Code with Redemption Data: A redeemable QR code containing redemption data is displayed. Users utilise this QR code to redeem their tickets.
Redeemable QR code with redemption data Ticket Validator Interface: The validator reads the QR code and verifies if the ticket is valid.
Ticket validator interface Successfully Redeemed Ticket: Lastly, we see a successfully redeemed ticket. The platform updates the ticket status and prevents any further use.
Successfully redeemed ticket The demonstration of the decentralized ticketing platform showcases the functionality and security of the developed system. It offers a user-friendly interface while ensuring a secure and transparent management of tickets.
Outlook Despite the progress made, there are still some central instances, such as the API Gateway, that could be decentralized in future developments. In the future, native Layer-1 assets could be processed on a Layer-2 EVM network. This would make it possible to further decentralize central instances and increase the security and scalability of the system.
Another issue is the lack of a stablecoin immune to the exchange-rate fluctuations of the balance tokens. Introducing such a stablecoin would help increase user trust in the system and further promote acceptance of the solution.
In many cases, monopolists have contracts with venues to ensure planning security. One way to make event organizers more independent of the platforms would be to use smart contracts to conclude such contracts directly between venues and artists. This would provide both organizers and artists with more flexibility, fairness, and independence.
Conclusion This decentralized ticketing project, built upon IOTA&amp;rsquo;s Shimmer Network and leveraging modern technologies like IPFS-based OrbitDB, NestJS, and Angular, promises to increase transparency, fairness, and eliminate high fees in the ticketing process once it is launched. It&amp;rsquo;s time to say goodbye to ticket monopolists and welcome a new era of ticketing! 🎉
]]></content:encoded></item><item><title>Going fully remote - maximum freedom</title><link>https://bytevagabond.com/post/going-fully-remote/</link><pubDate>Thu, 25 Jun 2020 10:50:00 UT</pubDate><guid>https://bytevagabond.com/post/going-fully-remote/</guid><description>After graduating from college in 2015, I went into a gap year. However, according to the Vietnamese news agencies, my trip was somewhat different. With the help of a friend I have translated one of many (1, 2, 3,...) news articles about this journey. An interesting story about the western guy who brought the whole &amp;lsquo;motorbike camper&amp;rsquo; to Vietnam This German student is on the road with his motorbike and a &amp;lsquo;camper van&amp;rsquo;, which is fitted directly at the side of his motorbike.</description><category domain="https://bytevagabond.com/categories/travel">Travel</category><category domain="https://bytevagabond.com/categories/personal">Personal</category><content:encoded><![CDATA[ After graduating from college in 2015, I went into a gap year. However, according to the Vietnamese news agencies, my trip was somewhat different. With the help of a friend I have translated one of many (1, 2, 3,...) news articles about this journey. An interesting story about the western guy who brought the whole &amp;lsquo;motorbike camper&amp;rsquo; to Vietnam This German student is on the road with his motorbike and a &amp;lsquo;camper van&amp;rsquo;, which is fitted directly at the side of his motorbike.
His name is Max Heichling, a 19-year-old German student. He has travelled through Thailand, Cambodia and Laos and is currently visiting Vietnam on his way to Saigon.
The story of this foreigner is told on the personal Facebook page of a Vietnamese man named Trung Pham.
The photos posted on Trung Pham&amp;rsquo;s Facebook page about this special man and his special motorbike went viral on social media and received many &amp;ldquo;favours&amp;rdquo;. Many people are interested in this special character because of the way he chose to travel.
Trung Pham also said that he met this student only by chance. &amp;ldquo;As I was walking down the street past the bus station, I saw a crowd of people watching this foreigner and I thought they must be harassing him. I came closer and asked the people, and they said he was cooking dinner. A woman who was cooking for him said: &amp;ldquo;I went home and came back to buy food for him. The problem was that he did not speak Vietnamese, only English&amp;rdquo;.
He brought some eggs with a mini gas cooker and some fish balls in a box. The woman urged him to let her cook fried eggs for him.
Vietnamese woman cooking eggs with Max Then she turned around and said: &amp;ldquo;If someone speaks English, talk to him and tell him to wait a moment. I will bring him tomatoes, melons and salad for dinner&amp;hellip; So I stepped in to talk to this guy&amp;hellip; He said that he is a German university student, 19 years old.&amp;rdquo;
The guy from the West is a cooking talent This student told us that he bought his motorbike in Thailand and then designed a sidecar to serve as a sleeping place and kitchen. The vehicle has a &amp;ldquo;bedroom&amp;rdquo; inside that can accommodate one person. Very practical. The sidecar also has a kitchen with a cooker; on the roof is a solar panel that charges a car battery in the floor, plus a power adapter, a 20 litre water tank and a 20 litre petrol tank&amp;hellip; generally all the tools you need to live.
On Facebook, Trung Pham also said that their conversation didn&amp;rsquo;t last long because the language barrier was too big, but he felt that Max was a fascinating, funny and very loving person. He has travelled through Thailand, Cambodia and Laos and is currently in Vietnam, where he will visit Kon Tum, Gia Lai, Buon Me Thuot and Saigon. Now he is travelling to Saigon to meet a friend from Germany.
The sleeping place is only for one person and the journey is cramped, but to the locals it seems very reasonable. When asked how he takes a bath, the guy from the West smiled and told us that you only need to shave your head, so there is no need to wash your hair; then you simply apply soap and water to the skin. Clothing can be washed and then hung on the back of the vehicle, where it dries while driving.
When he was told to lock up his sleeping place and make sure he didn&amp;rsquo;t lose his clothes, he laughed and said: &amp;ldquo;Don&amp;rsquo;t worry. Because there are good people here. This is my first time in Vietnam, but when I come here, everyone is nice. Everyone is friendly, happy and willing to help me&amp;rdquo;. Trung Pham said: &amp;ldquo;In the eyes of the German student there are only joy and excitement in his own experience&amp;rdquo;.
Everybody helps him to repair the vehicle. A young student who is passionate enough to do something like this is truly admirable. This is probably the most extraordinary student we have ever met&amp;hellip;
After a night of thinking about his life, Max decided to take a long trip to Thailand. Everyone said that Southeast Asia has a long history, many cultural beauties and the people are very friendly.
When he arrived in Thailand, Max wanted to buy a tuk-tuk and have it repaired, but the cost was too high, and local traffic laws did not allow the vehicle to drive in urban areas, perhaps because of the large amount of CO2 it emitted.
The Isan region of Thailand has a special Samlo, and he decided to get one. This is a motorbike designed with a large space on the side to carry more people or goods.
Max plans to travel by motorbike from Thailand to Germany, passing Myanmar, India, Nepal, Pakistan, Iran, Turkey, Belgium, Serbia, Hungary and Austria. He started his route in Thailand and travelled to Laos and now to Vietnam.
When Max&amp;rsquo;s parents heard this decision, it was really unpleasant. But they believe that their son has grown up and is able to make the journey safely. They bid him farewell from their home, the city of Augsburg in southern Germany.
Max stayed in the Thai city of Khon Kaen for about a week to visit his friends there and admire the beauty of this Buddhist city. A friend from Germany also came from Bangkok on a long-distance bus and explored the city with him.
He then went to Laos and stayed in the mountains, since he had brought ropes, tarps and hammocks so that he could set up a small shelter to spend the night in the forest. There is also a cooker and a solar panel system on the self-built vehicle. Max&amp;rsquo;s trip is really quite comfortable.
The people of Lam Dong are very interested in the story of the young man. Max went along the Ho Chi Minh Trail to Dak Choong in Kon Tum Province, and had a good time in Da Lat. The people here are very excited about his car and his trip.
When asked about Vietnamese food, he said that since he did not stay in Vietnam for long, he still had not had the opportunity to enjoy the country&amp;rsquo;s specialities to the full. The most impressive thing is the shrimp paste: &amp;ldquo;I honestly cannot stand the taste of this dish&amp;rdquo;.
When he arrived in Saigon on the morning of 10 August, the homemade motorbike had to be parked outside the city. He stayed overnight with his friends in District 10, near the Polytechnic University of HCM City. After leaving HCMC, Max will spend three more weeks in Vietnam; he will then visit the capital Hanoi, move on to Laos and travel through 13 countries to return to Germany.
When asked whether he will return to Vietnam in the future, he said he would like to come back with more time to visit the tourist attractions and enjoy the special dishes of Vietnam. He also has many friends in Vietnam, so the next trip will be very interesting.
]]></content:encoded></item><item><title>A fully decentralised, easy accessible app</title><link>https://bytevagabond.com/post/shadow-economy/</link><pubDate>Sat, 09 May 2020 10:50:00 UT</pubDate><guid>https://bytevagabond.com/post/shadow-economy/</guid><description>The vision Imagine an application that could do it all&amp;hellip; Something between reddit, instagram, telegram and eBay. Now imagine everyone on this platform can be anonymous - no IP log, no nothing. The application would not be served from a single server, instead it would be decentralised and accessible through a regular browser. This means, that anyone can participate and communicate / buy / sell anything they want. Because of its decentralised nature, the application could not be stopped or shut down by a government entities.</description><category domain="https://bytevagabond.com/categories/future">Future</category><content:encoded>The vision Imagine an application that could do it all&amp;amp;hellip; Something between reddit, instagram, telegram and eBay. Now imagine everyone on this platform can be anonymous - no IP log, no nothing. The application would not be served from a single server, instead it would be decentralised and accessible through a regular browser. This means, that anyone can participate and communicate / buy / sell anything they want. Because of its decentralised nature, the application could not be stopped or shut down by a government entities. That’s going to allow everyone on this planet to begin engaging directly with each other on the regular Internet and bypassing a lot of middle man that kept us away from each other for so long. We can have direct engagement now. This is the revolution! It evens the playing field…
The execution Without questioning why you would build something like that, let&amp;amp;rsquo;s talk about how you would build it.
Database Of course you need a database for this app, but it cannot be a traditional database hosted on a server. Instead it has to be hosted on multiple nodes and on the participating clients: a decentralised database with conflict-free replicated data types, so that replicas can be updated independently and concurrently without coordination between them. This requirement narrows it down to two possible peer-to-peer databases: OrbitDB and GUN. The former uses IPFS and the latter uses WebSockets to sync the data between peers. The problem with these two (in fact, probably with any p2p database) is that they require a bootstrapping server in the case of IPFS and a signaling server in the case of GUN. The IP addresses of these servers are publicly known and can therefore be blocked by the authorities. But maybe we found a solution to this&amp;amp;hellip;
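(Before getting to that, here is roughly what peer-to-peer writes and reads look like with GUN; the relay URL, keys and data are placeholders, and the same idea applies to OrbitDB with a different API.)

```ts
import Gun from 'gun';

// Minimal GUN sketch: peers replicate this data without a central database.
// The relay URL below is a placeholder, not a real server.
const gun = Gun({ peers: ['https://relay.example.org/gun'] });

// Publish a listing; every connected peer can replicate it.
gun.get('listings').get('listing-1').put({ title: 'Used bike', price: 80 });

// Subscribe to changes on all listings as they sync in.
gun.get('listings').map().on((data, key) => {
  console.log('listing', key, data);
});
```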
Removing the Achilles heel While digging through the IPFS code I found this discussion about censorship resistance. One comment suggested a couple of methods for querying a list of available bootstrap nodes, which is a mediocre, not fail-proof solution. Aymeric Vitte eventually joins the conversation and suggests implementing a project of his, node-Tor, to anonymise the IP addresses of the bootstrapping servers and peers. It is an open-source JavaScript implementation of the Tor protocol that runs both server-side and inside browsers. It is not fully production-ready but could be the answer to the final problem.
Decentralised Identity Finally, we need a way to authenticate the users. The W3C published a decentralised identity standard, and IOTA picked it up and is working on an SDK based on IOTA called identity.ts.
Verdict Sooner or later our vision will come true. We are discussing and probably soon actually starting to build a completely decentralised, easy accessible web 3.0 application. If you want to be a part, please contact me.</content:encoded></item><item><title>Food for thought</title><link>https://bytevagabond.com/post/relevant-quotes/</link><pubDate>Mon, 04 May 2020 10:50:00 UT</pubDate><guid>https://bytevagabond.com/post/relevant-quotes/</guid><description>Surveillance Capitalism So, famously, industrial capitalism claimed nature. Innocent rivers, and meadows, and forests, and so forth, for the market dynamic to be reborn as real estate, as land that could be sold and purchased. Industrial capitalism claimed work for the market dynamic to be reborn as labor that could be sold and purchased. Now, here comes surveillance capitalism, following this pattern, but with a dark and startling twist. What surveillance capitalism claims is private, human experience.</description><category domain="https://bytevagabond.com/categories/future">Future</category><content:encoded><![CDATA[Surveillance Capitalism So, famously, industrial capitalism claimed nature. Innocent rivers, and meadows, and forests, and so forth, for the market dynamic to be reborn as real estate, as land that could be sold and purchased. Industrial capitalism claimed work for the market dynamic to be reborn as labor that could be sold and purchased. Now, here comes surveillance capitalism, following this pattern, but with a dark and startling twist. What surveillance capitalism claims is private, human experience. Private, human experience is claimed as a free source of raw material, fabricated into predictions of human behavior. And it turns out that there are a lot of businesses that really want to know what we will do now, soon, and later. You are not using social media, social media is using you. You are not searching Google, Google is searching you Shoshana Zuboff Sharing Economy Let&amp;#39;s say you go up on this internet of things infrastructure you have a lot of data coming through right. You can take the data you care about in your business and mine it with your own analytics create your own algorithms and apps so you can dramatically increase your aggregate efficiency at every conversion on your value chain. Your logistics, your production, your distribution, your recycling and dramatically plunge your carbon footprint. Because you&amp;#39;re getting more out of less of the earth in a circular economy. And you plunge your fixed in marginal cost. Some of the marginal is already getting so low that it&amp;#39;s forcing a metamorphose change of capitalism itself. This is coming from the inside. Let me explain: In economics we always teach our students that the optimum market is where you sell at marginal cost. You want to put out cheap products, cheap services, win over market share, bring some profits back to the investors. Correct that&amp;#39;s the optimal market. It&amp;#39;s just we didn&amp;#39;t anticipate a digital revolution that could be so powerful in its efficiency taking us from 20% of aggregate efficiency to 60%. Those are our studies with our global team so the digital technology allows the marginal cost to really plunge and then your profit margins shrink. This is forcing a shift in the capitalist business model from the inside. Market capitalism is too slow for the digital revolution. Markets are transactional. You have a seller, a buyer. 
They come together, they alienate the good or service and then it&amp;#39;s over. And in between you have marketing advertising costs, warehouses, you have to pay your employees, insurance etc. too slow with low marginal cost so: We&amp;#39;re going to have to move from transactions in markets to flows in networks. We&amp;#39;re going to move from ownership to access, from sellers and buyers to providers and users. We&amp;#39;re going to move from productivity to regenerativity, from GDP to quality of life indicators from externality to circularity. And it&amp;#39;s this digital interconnectivity that allows us to do every one of these things in real time. When the marginal costs become low the only way you keep your margins is to blockchain them or some other way and by 24/7 operation of traffic and provider user networks there&amp;#39;s no downtime. Jeremy Rifkin Democracy is failing When people say: &amp;#34;You have nothing to hide then you have nothing to fear&amp;#34; they really mean: It&amp;#39;s fine the way it is we take care of everything. You give up your power and your freedom and we take care of you. Trust us. But a democracy is not about trust, it&amp;#39;s only about co-determination. I think it is unrealistic to rely on altruism or self-sacrifice. People believe that meaningful engagement, trying to change things for the better or fighting injustice, that it costs them more than it benefits them. that has to change. The last ten years are perhaps the best example in modern history of how effective the politics of fear is. Fear is a successful political strategy to undermine our most precious values, our heritage, our history, our laws and rights. All a politician needs to say is that this is because of terrorism. That&amp;#39;s enough. You just have to repeat it often enough like an incantation and rights will vanish into thin air and the law can be passed. what if we could change this mechanism. Imagine you are 20 years older, you have three kids in college, a mortgage on your house and a partner who has made a career. Then your refusal to do the right thing for the common good happens out of consideration or affection for your family because their life is so closely linked to you. There are even more of the older people in it who are already established. They are in many ways the establishment. I don&amp;#39;t mean the elite but ordinary citizens. They have lost much of their democratic influence because they are afraid of losing something. We have to take away their fear. That&amp;#39;s the point. Why is democracy so important to us? Why do we fight so hard for it? Because it&amp;#39;s about the right to self-determination! Ordinary people can do unusual things and if they don&amp;#39;t then there is something much bigger at stake. Edward Snowden Freedom of speech in the age of Social Media Freedom of speech does not mean freedom of reach
Sacha Baron Cohen ]]></content:encoded></item><item><title>Offline-First Cross-Platform with Ionic and CouchDB</title><link>https://bytevagabond.com/post/webdev-endgame-2020/</link><pubDate>Wed, 15 Jan 2020 08:52:44 UT</pubDate><guid>https://bytevagabond.com/post/webdev-endgame-2020/</guid><description>I created a full-stack template to quickly build native cross-platform apps with one web code base: CAIN-Stack Code &amp; Docs Intro With Steve Jobs originally presenting the idea of web apps &amp;ldquo;that look exactly and behave exactly like native apps&amp;rdquo; 12 years ago and 4 years since the term &amp;ldquo;PWA&amp;rdquo; was coined, how close are we to a webpage that behaves like a native app? First, let&amp;rsquo;s take a look at what makes an app native:</description><category domain="https://bytevagabond.com/categories/tutorials">Tutorials</category><category domain="https://bytevagabond.com/categories/code">Code</category><content:encoded><![CDATA[ I created a full-stack template to quickly build native cross-platform apps with one web code base: CAIN-Stack Code &amp;amp; Docs Intro With Steve Jobs originally presenting the idea of web apps &amp;ldquo;that look exactly and behave exactly like native apps&amp;rdquo; 12 years ago and 4 years since the term &amp;ldquo;PWA&amp;rdquo; was coined, how close are we to a webpage that behaves like a native app? First, let&amp;rsquo;s take a look at what makes an app native:
Look &amp;amp; feel
Data access anytime
Native APIs
I will show you how you can develop a web application that works just as well as a native app.
The look &amp;amp; feel Low effort, big result: web manifest &amp;amp; service worker. I assume you are using Angular, but for other frameworks the principles should be the same. Turning your web app into a PWA takes three basic steps:
Run ng add @angular/pwa inside your project. This will create two files, manifest.webmanifest and ngsw-config.json, and automatically wire them into your project (the manifest is linked in index.html).
Install the pwa-asset-generator and run it inside your project, like so:
pwa-asset-generator ./assets/icon/favicon.png ./assets/icons -b &amp;#34;#292d3E&amp;#34; -i ./index.html -m ./manifest.webmanifest This will create all necessary icons and splashscreens for all platforms, taking the favicon and a color code as parameters. It will also update your index.html and webmanifest accordingly.
Update your ngsw-config.json to specify which resources should stay in the cache, and choose your install and update mode. I am using prefetch at entkraefter.pro, because I want the audio files to be available as soon as possible rather than lazy loading them.
{
  &amp;#34;name&amp;#34;: &amp;#34;assets&amp;#34;,
  &amp;#34;installMode&amp;#34;: &amp;#34;prefetch&amp;#34;,
  &amp;#34;updateMode&amp;#34;: &amp;#34;prefetch&amp;#34;,
  &amp;#34;resources&amp;#34;: {
    &amp;#34;files&amp;#34;: [
      &amp;#34;/assets/**&amp;#34;,
      &amp;#34;/assets/mp3/**&amp;#34;
    ]
  }
}
Ionic - Angular just better Ionic is a web framework that was originally only compatible with Angular, but with the redesign in Ionic 4 around standardized web components it is now also available for React, Vue and plain JavaScript. Its main goal is to provide a UI toolkit for developing high-quality cross-platform apps for native iOS, Android, and the web, all from a single codebase. Ionic is open source and has been around since 2013, providing web components that automatically style and behave like the host system they are running on. But that is not all: it has many more nice features, like a grid system (so no need for Bootstrap) and, especially for Angular, out-of-the-box lazy loading, advanced life cycle hooks and more. The Ionic CLI is very helpful and powerful. Check out the docs to get started.
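Back on the service worker for a moment: once people install the app, they can get stuck on an old cached version, so it is worth prompting them to reload when an update has been downloaded. Here is a minimal sketch, assuming the SwUpdate service that @angular/service-worker provided at the time; the confirm() prompt and the service name are just placeholders, not part of the template.
// update.service.ts - minimal sketch: ask the user to reload when the
// Angular service worker has downloaded a newer version of the app.
import { Injectable } from '@angular/core';
import { SwUpdate } from '@angular/service-worker';

@Injectable({ providedIn: 'root' })
export class UpdateService {
  constructor(private swUpdate: SwUpdate) {
    // SwUpdate is only active in builds where the service worker is registered
    if (this.swUpdate.isEnabled) {
      this.swUpdate.available.subscribe(() => {
        if (confirm('A new version is available. Reload now?')) {
          this.swUpdate.activateUpdate().then(() => document.location.reload());
        }
      });
    }
  }
}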
The data - an offline first approach So far we have created a dumb website that can be installed on 93% of the devices people currently use to browse the web. Let&amp;rsquo;s make it dynamic and smart by adding an offline database with PouchDB and syncing it with a remote CouchDB instance. We will also take a look at how authentication and user roles can work with superlogin and nano. Let&amp;rsquo;s start.
Brief introduction to NoSQL In a relational database, data is stored in a set of tables that are made up of rows and columns. A Structured Query Language that consists of keywords like SELECT, FROM, WHERE, and JOIN can be used to query these tables for information.
Tables in a relational database have a pre-defined “schema”. A schema defines the structure of the tables, and the type of data that will be stored in them. In MySQL, for example, you might define a “schema” for a table like this:
CREATE TABLE Cars (
  id INT(6) AUTO_INCREMENT PRIMARY KEY,
  make VARCHAR(30),
  model VARCHAR(30),
  year INT(6),
  purchased DATETIME
)
Unlike a relational database, a NoSQL database has no predefined schema and does not store data using related tables. NoSQL is not one specific thing, but in general a NoSQL database is not relational. CouchDB is a document-based NoSQL database. A document in this context is simply a JSON object like this:
{
  &amp;#34;_id&amp;#34;: 1,
  &amp;#34;name&amp;#34;: &amp;#34;Max&amp;#34;,
  &amp;#34;country&amp;#34;: &amp;#34;Austria&amp;#34;,
  &amp;#34;interests&amp;#34;: [&amp;#34;Ionic&amp;#34;, &amp;#34;IOTA&amp;#34;, &amp;#34;Insect Protein&amp;#34;]
}
Introduction to CouchDB &amp;amp; PouchDB There are many advantages to using CouchDB, including the ease with which it can be scaled and the speed of read and write operations, but the killer feature when it comes to mobile applications is its ability to synchronize between multiple databases. A CouchDB database implements a RESTful API, which means we can interact with it using HTTP methods like GET, PUT, POST, and DELETE. So, if we wanted to read some data from a CouchDB database we might make a GET request to the following URL: http://someserver.com/mydatabase/_design/posts/_view/by_date_published. When installing CouchDB locally it comes with a web GUI called Fauxton for managing databases.
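To make the REST part concrete, here is a minimal sketch of talking to such a database with plain fetch calls; the base URL reuses the placeholder server from above, and authentication and error handling are left out.
// Minimal sketch of CouchDB's REST API using fetch (auth omitted).
const BASE = 'http://someserver.com/mydatabase'; // placeholder server from above

// Read a document by its _id; the response includes _id and _rev
async function getDoc(id: string) {
  const res = await fetch(`${BASE}/${id}`);
  return res.json();
}

// Create or update a document; CouchDB expects the current _rev on updates
async function putDoc(doc: { _id: string; _rev?: string }) {
  const res = await fetch(`${BASE}/${doc._id}`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(doc),
  });
  return res.json(); // { ok, id, rev } on success, an error body on a 409 conflict
}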
PouchDB is a CouchDB style database that runs locally on the user’s device. It can be used independently, or it can be used in conjunction with other remote CouchDB databases.
When using the PouchDB library in your Ionic application, you could trigger a sync between a local database (i.e. one running on the users phone) and a remote database (running on a server) with a single line of code:
PouchDB.sync(&amp;#39;mydb&amp;#39;, &amp;#39;http://url/to/remote/database&amp;#39;);
To provide easy scalability, fast reads and writes, and synchronization, CouchDB prioritizes partition tolerance and availability over consistency, unlike traditional MySQL databases. If two users try to update the same document, one online and one offline, the update of the offline user will be rejected when they come back online. CouchDB assigns each document a “revision” number, which is stored in the _rev field. If the _rev fields match, CouchDB will process the update and increment the _rev number. If the _rev fields do not match, the update will be rejected. What this means for our situation with two simultaneous users is that whoever syncs their update to the remote database first will “win” and have their update accepted.
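As a small illustration of what that means in code, here is a sketch (not part of the template) of retrying a rejected update on top of the winning revision:
// Sketch: if put() is rejected with a 409 conflict, fetch the winning
// revision and re-apply our changes on top of it.
import PouchDB from 'pouchdb';

const db = new PouchDB('mydb');

async function upsert(doc: { _id: string; [key: string]: any }) {
  try {
    return await db.put(doc);
  } catch (err: any) {
    if (err.status !== 409) { throw err; }   // only handle revision conflicts
    const latest = await db.get(doc._id);    // current winner, including its _rev
    return db.put({ ...doc, _rev: latest._rev });
  }
}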
The following is an example of a data.service that handles database operations and synchronization with the databases the user has access to. For this to work, you need to inject the DataService into the constructor of your app.component.ts. The initDatabase function is triggered by the auth.service when the user logs into the application. We will take a look at this in a second.
import { Injectable } from &amp;#39;@angular/core&amp;#39;; import * as PouchDB from &amp;#39;pouchdb/dist/pouchdb&amp;#39;; import { UserService } from &amp;#39;./user.service&amp;#39;; @Injectable({ providedIn: &amp;#39;root&amp;#39; }) export class DataService { public dbs = null; private remoteAddress = []; private remoteName = []; constructor(private userService: UserService) {} initDatabase(remote): void { this.remoteAddress = Object.values(remote.userDBs); this.remoteName = Object.keys(remote.userDBs); this.dbs = {}; // save PouchDB instances and remote address, that the user has access to for (let i = 0; i &amp;lt; this.remoteName.length; i&#43;&#43;) { this.dbs[this.remoteName[i]] = new PouchDB(this.remoteName[i], { auto_compaction: true }); this.dbs[this.remoteName[i]].address = this.remoteAddress[i]; } this.initRemoteSync(); } initRemoteSync(): void { const options = { live: true, retry: true, }; for (const db in this.dbs) { const dbRemote = this.dbs[db].address; this.dbs[db].sync(dbRemote, options); } } // Database operations, dbname is provided by service createDoc(doc, dbname): Promise&amp;lt;any&amp;gt; { return this.dbs[dbname].post(doc); } updateDoc(doc, dbname): Promise&amp;lt;any&amp;gt; { return this.dbs[dbname].put(doc); } deleteDoc(doc, dbname): Promise&amp;lt;any&amp;gt; { return this.dbs[dbname].remove(doc); } } The remote server.js - authentication and creating users Our remote server mainly consists of three files: server.js to initialize the server, superlogin.config.js to configure superlogin and superlogin.controller.js to handle user creation and authentication. Your package.json should look like the following:
{
  &amp;#34;name&amp;#34;: &amp;#34;my-server&amp;#34;,
  &amp;#34;version&amp;#34;: &amp;#34;1.0.0&amp;#34;,
  &amp;#34;description&amp;#34;: &amp;#34;&amp;#34;,
  &amp;#34;main&amp;#34;: &amp;#34;server.js&amp;#34;,
  &amp;#34;scripts&amp;#34;: {
    &amp;#34;start&amp;#34;: &amp;#34;node server.js&amp;#34;
  },
  &amp;#34;dependencies&amp;#34;: {
    &amp;#34;superlogin&amp;#34;: &amp;#34;^0.6.1&amp;#34;,
    &amp;#34;body-parser&amp;#34;: &amp;#34;^1.17.2&amp;#34;,
    &amp;#34;cors&amp;#34;: &amp;#34;^2.8.3&amp;#34;,
    &amp;#34;couch-pwd&amp;#34;: &amp;#34;github:zemirco/couch-pwd&amp;#34;,
    &amp;#34;del&amp;#34;: &amp;#34;^3.0.0&amp;#34;,
    &amp;#34;express&amp;#34;: &amp;#34;^4.15.3&amp;#34;,
    &amp;#34;https&amp;#34;: &amp;#34;^1.0.0&amp;#34;,
    &amp;#34;method-override&amp;#34;: &amp;#34;^2.3.9&amp;#34;,
    &amp;#34;morgan&amp;#34;: &amp;#34;^1.8.2&amp;#34;,
    &amp;#34;nano&amp;#34;: &amp;#34;^8.1.0&amp;#34;
  }
}
The server.js should look like this:
const express = require(&amp;#39;express&amp;#39;);
const https = require(&amp;#39;https&amp;#39;);
const bodyParser = require(&amp;#39;body-parser&amp;#39;);
const logger = require(&amp;#39;morgan&amp;#39;);
const cors = require(&amp;#39;cors&amp;#39;);
const superloginController = require(&amp;#39;./controllers/superlogin.controller.js&amp;#39;);

const app = express();
app.use(logger(&amp;#39;dev&amp;#39;));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));
app.use(cors());

superloginController.initSuperLogin(app);

app.listen(process.env.PORT || 8080);
All it does is initialize our Express server. The more interesting part is superlogin. It does all the heavy lifting for us when it comes to
Logging in
Logging out
Account creation
Validating usernames
Validating emails
This is not the limit of what superlogin can do. Check out the docs for more information. The full potential can be seen in this sample config.
As PouchDB sync always operates on an entire database (or at best a filtered subset of it), we need to think about how we structure our data to avoid unauthorized access. A database-per-user setup is the best option. Let&amp;rsquo;s take a look at our superlogin.config.js:
module.exports = { dbServer: { protocol: &amp;#39;http://&amp;#39;, host: &amp;#39;127.0.0.1:5984&amp;#39;, user: &amp;#39;admin&amp;#39;, password: &amp;#39;couchdb&amp;#39;, cloudant: false, userDB: &amp;#39;couchdb-users&amp;#39;, couchAuthDB: &amp;#39;_users&amp;#39; }, security: { maxFailedLogins: 5, lockoutTime: 600, tokenLife: 604800, // one week loginOnRegistration: false, defaultRoles: [&amp;#39;user&amp;#39;] }, mailer: { fromEmail: &amp;#39;gmail.user@gmail.com&amp;#39;, options: { service: &amp;#39;Gmail&amp;#39;, auth: { user: &amp;#39;gmail.user@gmail.com&amp;#39;, pass: &amp;#39;userpass&amp;#39; } } }, userDBs: { defaultDBs: { shared: [&amp;#39;shared&amp;#39;], private: [&amp;#39;private&amp;#39;] } }, providers: { local: true }, userModel: { whitelist: [&amp;#39;isAdmin&amp;#39;], isAdmin: false, }, }; In userDBs we specify to which database our user has access to. We define a shared database where any authorized user has access to and a private database, which only our user can access. If we want to have different user roles, let&amp;rsquo;s say an admin, who has access to all the other users, we need to modify the userModel to create an isAdmin field. Let&amp;rsquo;s take a look at our superlogin.controller.js to see how we create users and give admins special access.
const nano = require(&amp;#39;nano&amp;#39;)(&amp;#39;http://admin:couchdb@localhost:5984&amp;#39;); const superloginConfig = require(&amp;#39;../config/superlogin.config.js&amp;#39;); const SuperLogin = require(&amp;#39;superlogin&amp;#39;); module.exports.initSuperLogin = app =&amp;gt; { // Initialize SuperLogin const superlogin = new SuperLogin(superloginConfig); // Mount SuperLogin&amp;#39;s routes to our app app.use(&amp;#39;/auth&amp;#39;, superlogin.router); // Create superlogin event emitter superlogin.on(&amp;#39;signup&amp;#39;, function(userDoc, provider) { // opts for replication const opts = { continuous: true, create_target: true, // exclude design documents selector: { &amp;#34;_id&amp;#34;: { &amp;#34;$regex&amp;#34;: &amp;#34;^(?!_design\/)&amp;#34;, } } }; // get private DB name const regex = /^private\$.&#43;$/; let privateDB; for (let dbs in userDoc.personalDBs) { console.log(dbs) if (regex.test(dbs)) { privateDB = dbs; } } // Replicate design documents to private DB from userDoc nano.db.replicate(&amp;#39;user-resources&amp;#39;, privateDB).then((body) =&amp;gt; { return nano.db.replication.query(body.id); }).then((response) =&amp;gt; { // console.log(response); }); if (userDoc.isAdmin) { // Replicate AdminDB to AdminUsers nano.db.replication.enable(&amp;#39;admin-database&amp;#39;, privateDB, opts).then((body) =&amp;gt; { return nano.db.replication.query(body.id); }).then((response) =&amp;gt; { // console.log(response); }); } else { // Enable replication from userDB to adminDB nano.db.replication.enable(privateDB, &amp;#39;admin-database&amp;#39;, opts).then((body) =&amp;gt; { return nano.db.replication.query(body.id); }).then((response) =&amp;gt; { // console.log(response); }); } }) } First we initialize superlogin and mount our routes to the /auth api. Next up we tell superlogin to listen to signup events. We utilise nano to interact with CouchDB and set-up a few replications across our user and admin databases.
In the opts object we configure the replication to include everything except design documents. Design documents are special CouchDB documents that help us filter documents. We then run a for loop over the userDoc to get the name of our private user database. Next up we replicate the design documents that we need into our private database from the user-resources database.
If the userDoc does not have the isAdmin field, we replicate our privateDB to the admin-database. If the user does have the isAdmin field, we replicate all docs from the admin-database to the private database of our admin user.
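For context, a design document is just another JSON document whose _id starts with _design/. Here is a minimal sketch of one that could live in user-resources; the view name matches the by_date_published example earlier, but the document fields are invented for illustration and are not part of the template.
// Hypothetical design document with one view; stored like any other doc.
const postsDesignDoc = {
  _id: '_design/posts',
  views: {
    by_date_published: {
      // CouchDB evaluates this JavaScript map function for every document
      map: `function (doc) {
        if (doc.type === 'post') {
          emit(doc.published, doc.title);
        }
      }`,
    },
  },
};

// After replication it can be queried from PouchDB, for example:
// db.query('posts/by_date_published', { descending: true });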
That&amp;rsquo;s it for the server side part. Let&amp;rsquo;s go back to our Ionic application to finish the auth.service and user.service.
Authentication on the client side Update src/environments/environment.ts to reflect the following:
export const environment = {
  production: false,
};
export const SERVER_ADDRESS = &amp;#39;http://localhost:8080/&amp;#39;;
Our auth.service.ts looks like this:
import { Injectable, NgZone } from &amp;#39;@angular/core&amp;#39;; import { HttpClient, HttpHeaders } from &amp;#39;@angular/common/http&amp;#39;; import { NavController } from &amp;#39;@ionic/angular&amp;#39;; import { UserService } from &amp;#39;./user.service&amp;#39;; import { DataService } from &amp;#39;./data.service&amp;#39;; import { SERVER_ADDRESS } from &amp;#39;../../environments/environment&amp;#39;; @Injectable({ providedIn: &amp;#39;root&amp;#39; }) export class AuthService { constructor( private http: HttpClient, private userService: UserService, private dataService: DataService, private navCtrl: NavController, private zone: NgZone ) { } authenticate(credentials) { return this.http.post(SERVER_ADDRESS &#43; &amp;#39;auth/login&amp;#39;, credentials); } logout() { const headers = new HttpHeaders(); headers.append(&amp;#39;Authorization&amp;#39;, &amp;#39;Bearer &amp;#39; &#43; this.userService.currentUser.token &#43; &amp;#39;:&amp;#39; &#43; this.userService.currentUser.password); this.http.post(SERVER_ADDRESS &#43; &amp;#39;auth/logout&amp;#39;, {}, { headers }).subscribe((res) =&amp;gt; { }); // destroy all databases for (const db in this.dataService.dbs) { this.dataService.dbs[db].destroy().then((res) =&amp;gt; { console.log(res); } , (err) =&amp;gt; { console.log(&amp;#39;could not destroy db&amp;#39;); }); } this.dataService.dbs = null; this.userService.saveUserData(null); this.navCtrl.navigateRoot(&amp;#39;/login&amp;#39;); } register(details) { return this.http.post(SERVER_ADDRESS &#43; &amp;#39;auth/register&amp;#39;, details); } validateUsername(username) { return this.http.get(SERVER_ADDRESS &#43; &amp;#39;auth/validate-username/&amp;#39; &#43; username); } validateEmail(email) { const encodedEmail = encodeURIComponent(email); return this.http.get(SERVER_ADDRESS &#43; &amp;#39;auth/validate-email/&amp;#39; &#43; encodedEmail); } reauthenticate() { return new Promise((resolve, reject) =&amp;gt; { if (this.dataService.dbs === null) { this.userService.getUserData().then((userData) =&amp;gt; { if (userData !== null) { const now = new Date(); const expires = new Date(userData.expires); if (expires &amp;gt; now) { this.userService.currentUser = userData; this.zone.runOutsideAngular(() =&amp;gt; { this.dataService.initDatabase(userData); }); resolve(true); } else { reject(true); } } else { reject(true); } }); } else { resolve(true); } }); } } We use the basic api routes provided by superlogin for authentication, registration, validating the username and email. The more interesting parts are the logout() and reauthenticate() functions. Because we are synchronizing multiple databases, we have to make sure to destroy all of them with a for loop in the logout() function.
One main feature is that we want our users to log in automatically if they have previously logged in and have an unexpired token. Users should also have offline access to the data in the application, which syncs when they are online. The reauthenticate() function checks for a valid token in local storage and does just that. Let&amp;rsquo;s take a look at our user.service. This will be a short one.
import { Injectable } from &amp;#39;@angular/core&amp;#39;;
import { Storage } from &amp;#39;@ionic/storage&amp;#39;;

@Injectable({ providedIn: &amp;#39;root&amp;#39; })
export class UserService {
  public currentUser: any = false;

  constructor(public storage: Storage) {}

  saveUserData(data): void {
    this.currentUser = data;
    this.storage.set(&amp;#39;UserData&amp;#39;, data);
  }

  getUserData(): Promise&amp;lt;any&amp;gt; {
    return this.storage.get(&amp;#39;UserData&amp;#39;);
  }
}
We utilize Ionic&amp;rsquo;s Storage module to save and retrieve the userData on login and reauthentication.
That&amp;rsquo;s it for the data part. Let&amp;rsquo;s move on to the native apis.
Native APIs The main browser manufacturers have proposed new APIs for hardware and platform access, such as Contacts and WebNFC. Until then we can use Capacitor, the successor to Cordova, to add native functionality (you can also use Cordova plugins). You can also compile your front-end application with just a few commands into a full-featured native app: install and initialize Capacitor&amp;hellip;
npm install --save @capacitor/core @capacitor/cli
npx cap init
&amp;hellip;and then choose your platform.
npx cap add android
npx cap add ios
npx cap add electron
With entkraefter.pro I used the Share API. Test it out on your desktop, Android or iOS device to see beautiful, native share dialogs.
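For reference, here is a minimal sketch of how that looked with the Capacitor 1/2-style Plugins API that was current at the time (newer Capacitor versions ship Share as the separate @capacitor/share plugin); the title, text and dialog title are just examples.
// Sketch of native share dialogs via Capacitor's Share plugin (Capacitor 1/2 style).
import { Plugins } from '@capacitor/core';

const { Share } = Plugins;

async function shareApp() {
  await Share.share({
    title: 'Check this out',
    text: 'Built with Ionic and Capacitor',
    url: 'https://entkraefter.pro',
    dialogTitle: 'Share with friends', // only used by the Android share sheet
  });
}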
Bonus: Deployment To run Node.js applications on your server you should use PM2. For the moment I also use Azure DevOps (I plan to move to a self-hosted Jenkins solution) to build my front-end and server applications on commit and publish them automatically onto my VPS.
Thank you for your time. Also check out my Nginx performance tutorial.
]]></content:encoded></item><item><title>Evening the playing field - A new society with Industry 4.0 and IOTA</title><link>https://bytevagabond.com/post/iota-the-new-tcp-ip-for-the-next-industrial-revolution/</link><pubDate>Sat, 11 Jan 2020 21:55:54 UT</pubDate><guid>https://bytevagabond.com/post/iota-the-new-tcp-ip-for-the-next-industrial-revolution/</guid><description>Part 1: What was, will be GDP Growth is declining all around the world. We reached our economic peak 20 years ago. CO2 emissions are changing the water cycle rapidly, which is exponentially shifting the chemistry of our planet. By the end of this century only half of the species on this planet will be left. It is terrifying! So what do we do?
We need a new economic vision for the world and we need it quick.</description><category domain="https://bytevagabond.com/categories/future">Future</category><content:encoded><![CDATA[Part 1: What was, will be GDP Growth is declining all around the world. We reached our economic peak 20 years ago. CO2 emissions are changing the water cycle rapidly, which is exponentially shifting the chemistry of our planet. By the end of this century only half of the species on this planet will be left. It is terrifying! So what do we do?
We need a new economic vision for the world, and we need it quickly. So let&amp;rsquo;s take a step back and look at the economic revolutions we have had so far:
To have an industrial revolution you need innovations in energy, communication and transport that rapidly enhance the productivity of the economy.
1. Industrial Revolution, 19th century England With the discovery of coal as an energy source and the invention of the steam engine, the first industrial revolution dramatically changed how people lived. It enabled technologies like the steam-powered printing press and trains as completely new forms of communication and transport.
2. Industrial Revolution, 20th century USA The construction of centralized power plants and a public electricity network, the invention of the telephone and finally Henry Ford&amp;rsquo;s push to put everyone on the road again brought economic productivity to another level.
3. Industrial Revolution The third industrial revolution, also known in Europe as Industry 4.0 (because we count the internet as an industrial revolution of its own), is happening right now.
For 25 years we have had the digital .com internet as a new form of communication.
We are also now seeing an internet of renewable energy, where everyone can feed the excess from their solar roof into the public grid.
The global diffusion of an automated, GPS-guided and very soon driverless transport internet is ever growing.
These three internets ride on top of a platform called the IoT, the Internet of Things.
With sensors being implemented in many different types of devices, it is possible to monitor activity in real time. These devices can then talk to other machines and to us. Sensors in agriculture, factories, smart homes, vehicles, roads and your smartphone are all collecting data and sending it to the communication, transport &amp;amp; energy internets to move and power economic life. This will be omnipresent by 2030, connecting everything with everything. It is like creating an external prosthesis, a distributed nervous system for our economic life.
That&amp;rsquo;s going to allow everyone on this planet to begin engaging directly with each other on a global Internet of Things, bypassing a lot of the middlemen that kept us away from each other for so long. We can have direct engagement now. This is the revolution! It evens the playing field&amp;hellip;
Part 2: IOTA and its significance in the new Industrial Revolution Let&amp;rsquo;s talk a bit more specifically about one aspect of this industrial revolution: the communication protocol that powers the IoT, IOTA.
Distributed Ledger To understand IOTA you first have to understand distributed ledgers. A ledger is a record or list of transactions going from A to B. A distributed ledger is the most up-to-date version of this record, distributed across multiple nodes.
Two very famous distributed ledgers are Bitcoin and Ethereum. They run on a protocol called the blockchain. As the name suggests, it is a chain of blocks, and each of these blocks contains a certain number of transactions of our distributed ledger. So the first block holds the first ten transactions (this number is just for demonstration purposes; it varies for each blockchain), which are validated by servers all around the globe calculating a certain hash value.
IOTA runs on a protocol called a DAG, a directed acyclic graph. On the DAG every transaction is connected to others, not contained in a block. The problem IOTA solves is the so-called &amp;ldquo;Blockchain Bottleneck&amp;rdquo;. Because only a certain number of transactions can go into one block of the blockchain, the TPS or &amp;ldquo;transactions per second&amp;rdquo; are limited. So if there are a lot of transactions that need to be processed, the transaction fee rises. The idea of the tangle is actually quite simple:
Each transaction that you send needs to validate two others first.
This means that the more transactions there are, the more validation capacity there is. It completely eliminates the need for huge server and mining farms calculating hash values for the blockchain.
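To make the idea tangible, here is a toy sketch (nothing like the real IOTA node software: no proof of work, no signatures) in which every new transaction approves two randomly chosen tips, i.e. transactions that nobody has approved yet:
// Toy model of the tangle: each new transaction approves two existing tips.
interface Tx {
  id: number;
  approves: number[]; // ids of the two transactions this one validates
}

const tangle: Tx[] = [{ id: 0, approves: [] }]; // genesis transaction

// tips = transactions that have not been approved by anyone yet
function tips(): Tx[] {
  const approved = new Set(tangle.flatMap(tx => tx.approves));
  return tangle.filter(tx => !approved.has(tx.id));
}

function addTransaction(): Tx {
  const open = tips();
  // uniform random tip selection, the simplest possible strategy
  const pick = () => open[Math.floor(Math.random() * open.length)].id;
  const tx: Tx = { id: tangle.length, approves: [pick(), pick()] };
  tangle.push(tx);
  return tx;
}

for (let i = 0; i < 10; i++) { addTransaction(); }
console.log(`${tips().length} unapproved tips remain`);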
Tangle visualized
This is not the only benefit of IOTA compared to conventional blockchains:
Because there is no need for mining farms, there are no transaction fees.
IOTA is designed to be infinitely scalable.
The hashing scheme of IOTA is based on ternary and designed to be resistant to quantum computers.
As transactions do not have to be in consecutive blocks, offline transactions are possible.
Of course IOTA also has disadvantages:
Validation only happens when others participate in the network, so the network needs to be large enough to be self-sustaining. &amp;ldquo;So far so good, so what exactly does this have to do with a new economic revolution?&amp;rdquo;
IOTA connected World
Part 3: The Power of the Machine-to-Machine Economy In our connected IOTA world your washing machine can pay your neighbor directly for the electricity their solar panels are generating. Thus there is no need for a middleman that takes fees for delivering power to your home, making this process more economically productive. Even your light bulb, which might need just a fraction of the power, can pay directly for its needs, let&amp;rsquo;s say 0.2 cents, as the feeless nature of IOTA enables microtransactions.
This formula can also be applied to the production and transport of goods of any kind. Let&amp;rsquo;s say for example that the car manufacturer VW has a production line in Wolfsburg, Germany, putting together parts from all over the world to make a car. The wheels of this car are being produced in Taiwan. The robot in the wheel factory in Taiwan now notices that the wheel scheduled for a car in Wolfsburg is faulty. So it sends the production line in Germany a quick transaction, notifying the factory that it needs to source the wheel from somewhere else. It does this with the help of the IOTA protocol. Because of IOTA&amp;rsquo;s distributed nature, the factories in Taiwan and Germany can trust the transactions and do not have to rely on a third party. The wheel robot can also send the originally paid value straight back to Wolfsburg.
With hundreds of these transactions happening at any given time in just a single factory, the data can be analysed by AIs or smart algorithms. This process is called creating a digital twin of your factory: it simulates the whole production line and adjusts its parameters in real time for optimal output. Different aspects can now be included in the calculation of our parameters, for example CO2 emissions, speed and quality. The digital prosthesis is enhancing economic productivity in all sorts of industries.
IOTA industry use cases
IOTA itself is developed by a registered non-profit foundation in Germany. We barely scratched the surface in this post, but I strongly recommend you dive into the rabbit hole. I will list the best places for resources below:
IOTA Reddit
IOTA Discord
IOTA Blog ]]></content:encoded></item><item><title>The homeserver saga - Part 2</title><link>https://bytevagabond.com/post/homeserver-saga-part-2/</link><pubDate>Sat, 11 Jan 2020 15:17:10 UT</pubDate><guid>https://bytevagabond.com/post/homeserver-saga-part-2/</guid><description>My father needs a new laptop. And I have one sitting around that I abuse as a server. A waste of resources and not an ideal use case for a laptop. So I took my homeserver, put an SSD inside and gave it a new destiny. My father is happy, but I have no cloud anymore 😔. To conserve energy I chose to run the laptop only when I send a wake-on-LAN packet to it.</description><category domain="https://bytevagabond.com/categories/projects">Projects</category><content:encoded><![CDATA[My father needs a new laptop. And I have one sitting around that I abuse as a server. A waste of resources and not an ideal use case for a laptop. So I took my homeserver, put an SSD inside and gave it a new destiny. My father is happy, but I have no cloud anymore 😔. To conserve energy I chose to run the laptop only when I send a wake-on-LAN packet to it, which is a tedious process if you just want to upload some pictures. My new homeserver has to run full time, be powerful enough to run some Docker containers and be energy efficient. I started my research and found the NanoPi M4, which in combination with a SATA hat seems to be an ideal solution. The NanoPi runs on a Rockchip RK3399, which is more powerful than the Raspberry Pi 4. There are different 3D-printable NAS cases for the NanoPi with SATA hat. I am using this one, which has a very small form factor but only supports 2.5&amp;quot; disks. Not ideal for a NAS, but it will work. You can opt for other setups that support 3.5&amp;quot; drives. Just have a look at thingiverse.com.
Shopping List
High-efficiency power supply. 40 watts is plenty for this setup; the supply will not get warm even if the NAS is under heavy load.
NanoPi M4 – 4GB, SATA HAT for NanoPi M4, NanoPi M4 Heat Sink.
SATA Y-splitter cable.
32GB micro SD or bigger – choose your favorite manufacturer.
Maybe add a two-pin fan or some fancy OLED display to the mix, if you wanna go crazy.
Put it all together and flash OpenMediaVault on the card. Let&amp;rsquo;s boot it and start configuring OMV.
Basics &amp;amp; Docker setup The first thing to do in OMV4 is to change the default ports to 90 and 446, which should be done anyway. Create a shared folder named appdata and use SMB/CIFS (Samba) to create a share for your computer. There you can create additional folders. Mind the user rights! Then create three folders inside appdata: MariaDB, NextcloudConfig and NextcloudData.
Download the respective docker images in OMV4:
linuxserver/mariadb
linuxserver/nextcloud
ebspace/armhf-phpmyadmin
linuxserver/letsencrypt
linuxserver/duckdns
For all containers set Restart: always.
Install the MariaDB image
Container name: mariadb
Bridge Mode: Host Port: 3306 | Exposed Port: 3306
Add environment variables:
PUID - 1000 (may differ - please check in the terminal with: id USERNAME)
PGID - 100
TZ - Europe/Berlin
MYSQL_ROOT_PASSWORD - a root password for the database (remember it forever)
Add Volumes and Bind mounts:
Host Path: /sharedfolders/appdata/MariaDB
Container Path: /config
Save
Install the phpMyAdmin image
Container name: phpmyadmin
Network mode: host
Add environment variables:
PMA_HOST - IP address of your server
PMA_PORT - 3306
Save
Create database Open in your browser: https://ip-addresse-of-server/phpmyadmin
Login with:
Username: root
Password: the one you should remember forever
Then click on NEW at the top left.
Create new database - enter database name nextcloud and press Create
Then search the database in the lower field and click &amp;ldquo;Check rights&amp;rdquo;. (Do not click on the database itself, but only on &amp;ldquo;Check rights&amp;rdquo; on the right)
Click on NEW - Add user
Username: a username that you like
Host: %
Password: a password of your choice
Password: repeat the password
Scroll down and click OK.
Done. phpMyAdmin can be closed and we go back to OMV4-Docker
Install the Nextcloud image: Container name: nextcloud
Bridge Mode: Host Port: 447 | Exposed Port: 443
Add environment variables:
PUID - 1000
PGID - 100
Add Volumes and Bind mounts:
Host Path: /sharedfolders/appdata/NextcloudConfig
Container Path: /config
Host Path: /sharedfolders/appdata/NextcloudData
Container Path: /data
Save
Open in your browser: https://ip-addresse-of-server:447
Create admin account
Username: admin
Password: a password of your choice
Data directory: `/data`
Memory &amp;amp; Database: select `MySQL/MariaDB`
Database user: the user name you created earlier at phpMyAdmin (not root!)
Password: the corresponding password
Database name: nextcloud
Localhost: ip-address-of-the-server:3306
Start
All done with the docker setup. Don&amp;rsquo;t forget to adjust your MEMORY_MAX_LIMIT in php.ini of the nextcloud config.
Fan service Let&amp;rsquo;s add a small systemd service for our fan and let the machine do its thing.
#!/bin/bash
FANON=false
echo 150 &amp;gt; /sys/class/gpio/export # this will create /sys/class/gpio/gpio150
echo out &amp;gt; /sys/class/gpio/gpio150/direction
while true; do
  TEMP=$( cat /sys/class/thermal/thermal_zone0/temp )
  if [ $TEMP -gt 55000 ]; then
    # above 55 °C: switch the fan on
    echo 1 &amp;gt; /sys/class/gpio/gpio150/value
    FANON=true
  elif [ $TEMP -le 45000 ] &amp;amp;&amp;amp; $FANON; then
    # cooled back down below 45 °C: switch the fan off again
    echo 0 &amp;gt; /sys/class/gpio/gpio150/value
    FANON=false
  fi
  sleep 1
done ]]></content:encoded></item><item><title>Corebooted Thinkpad x230 with hardware mods</title><link>https://bytevagabond.com/post/corebooted-x230-with-hardware-mods/</link><pubDate>Sat, 11 Jan 2020 14:11:53 UT</pubDate><guid>https://bytevagabond.com/post/corebooted-x230-with-hardware-mods/</guid><description>The why and how Ever since the end of the Bush era in 2008, every Intel chip comes with an embedded subsystem called the Intel Management Engine. Every i3, i5, i7 and i9 processor has this engine. It is a black box and has access to every peripheral device like RAM, camera, microphone, hard drive, &amp;hellip; even when the main processor is turned off. Some say this is a backdoor for the NSA, some say this is pure paranoia.</description><category domain="https://bytevagabond.com/categories/projects">Projects</category><content:encoded><![CDATA[The why and how Ever since the end of the Bush era in 2008, every Intel chip comes with an embedded subsystem called the Intel Management Engine. Every i3, i5, i7 and i9 processor has this engine. It is a black box and has access to every peripheral device like RAM, camera, microphone, hard drive, &amp;hellip; even when the main processor is turned off. Some say this is a backdoor for the NSA, some say this is pure paranoia. Anyway, I don&amp;rsquo;t want this bloat in my house, that&amp;rsquo;s why I bought a Thinkpad x230 on eBay for 97€. The x230 line is at the moment one of the last models that can be hacked to get rid of the Intel Management Engine. Modern businesses do not have any use for their x230s anymore, so you can get them relatively cheap. I used this sniping tool to get it for 97€. This tool waits until one second before the auction finishes and bids one euro more than the highest bidder. The x230 is greatly loved by enthusiasts and hackers alike. That&amp;rsquo;s why there are so many mods available for this beautiful device. For example, you can install the keyboard of the x220 on the x230 if you flash a custom BIOS. The x220 keyboard is a very nice classical keyboard. I am typing on it right now and I feel like God himself kisses my fingers every time I press a key.
Flash a custom BIOS and install the x220 keyboard Update your BIOS to the latest version Lenovo provides and then flash this custom BIOS. It remaps keys that are positioned differently on the x220 keyboard to work with the x230. Then follow this guide to install the classic keyboard. If you have problems flashing a BIOS on Linux, follow this guide.
Coreboot the ThinkPad and get rid of the Intel Management Engine I am not going to say a lot about this. Basically just follow this guide and flash Skulls. Releases of Skulls are available here. I&amp;rsquo;ll pray for you; if you succeed you will join the ranks of the great men.
Install a custom BIOS splash screen Git clone this and then run
$ git submodule update --init
$ make &amp;amp;&amp;amp; sudo make install
After this follow the tutorial in the readme. When you are finished, edit /etc/default/grub and append iomem=relaxed to the GRUB_CMDLINE_LINUX_DEFAULT variable, so mine now looks like this: GRUB_CMDLINE_LINUX_DEFAULT=&amp;quot;quiet iomem=relaxed&amp;quot;. Finally I ran:
$ sudo grub-mkconfig -o /boot/grub/grub.cfg
Either you now have a piece of garbage or a piece of art.
Install Atheros Wi-Fi card, IPS display, hard drive and RAM The AR9380 by Atheros has great compatibility with Linux, huge range and supports monitor mode. This means you do not need an external Wi-Fi adapter anymore to hack your access points. It is a bit pricey, but considering the initial price of the laptop, it won&amp;rsquo;t break the bank. You can also upgrade your display to an IPS one for better colors and viewing angles. One upgrade you will probably have to do is buy an SSD and some sweet RAM.
That&amp;rsquo;s it. I hope you did well.
]]></content:encoded></item><item><title>The Homeserver saga - Part 1</title><link>https://bytevagabond.com/post/homeserver-saga-part-1/</link><pubDate>Thu, 09 Jan 2020 10:47:56 UT</pubDate><guid>https://bytevagabond.com/post/homeserver-saga-part-1/</guid><description>It is a hot summer and my brain is too overcooked to program anything&amp;hellip; Suddenly a bright light appears out of nowhere and screams at me:
&amp;ldquo;You are a privacy slut! All your files are stored on Google Photos and OneDrive. You are using Gmail and Instagram. DO SOMETHING!&amp;rdquo;.
So I did. I quickly deleted Instagram and went from Gmail to mailbox.org. But before I could delete all my files from those cloud services I needed an alternative.</description><category domain="https://bytevagabond.com/categories/projects">Projects</category><content:encoded><![CDATA[It is a hot summer and my brain is too overcooked to program anything&amp;hellip; Suddenly a bright light appears out of nowhere and screams at me:
&amp;ldquo;You are a privacy slut! All your files are stored on Google Photos and OneDrive. You are using Gmail and Instagram. DO SOMETHING!&amp;rdquo;.
So I did. I quickly deleted Instagram and went from Gmail to mailbox.org. But before I could delete all my files from those cloud services I needed an alternative. I needed a homeserver, so I thought the next best thing to do was to turn a laptop I had lying around into my personal server. Of course a laptop uses way more energy than needed, so wake-on-LAN seemed to be a good option back then. I installed Ubuntu Server on my laptop and enabled SSH.
Step 1: Disable sleep when lid is closed To turn the screen off but stop the laptop from hibernating when the lid is closed, we need to run the following commands:
$ sudo su
$ echo &amp;#39;HandleLidSwitch=ignore&amp;#39; | tee --append /etc/systemd/logind.conf
$ echo &amp;#39;HandleLidSwitchDocked=ignore&amp;#39; | tee --append /etc/systemd/logind.conf
$ sudo service systemd-logind restart
Step 2: Install ethtool on the laptop and configure the firewall
$ sudo apt-get install ethtool
Run ifconfig to check your network interfaces. Your LAN interface should be eth0.
Then test whether your laptop supports wake-on-LAN by running
$ sudo ethtool eth0
If a g is present in the Supports Wake-on line of the output, your laptop supports WOL. So go ahead and enable WOL by typing
$ sudo ethtool -s eth0 wol g
Get the MAC address of the interface and write it down.
$ netstat -ei
To stop the firewall from blocking WOL packets, type
$ sudo ufw allow ssh
$ sudo ufw allow 9
$ sudo ufw enable
Step 3: Enable etherwake and port-forwarding on the router I have an OpenWrt router, which is an open-source Linux router. As it runs Linux, I can make the machine do whatever I want. I recommend you get an OpenWrt router as well. GL.iNet has some very affordable and powerful routers.
Install the web-interface for etherwake
$ opkg install luci-app-wol
Configure /etc/config/etherwake and set the interface to eth1, i.e. the LAN interface, not the WAN one:
config &amp;#39;etherwake&amp;#39; &amp;#39;setup&amp;#39;
        option &amp;#39;pathes&amp;#39; &amp;#39;/usr/bin/etherwake /usr/bin/ether-wake&amp;#39;
        option &amp;#39;sudo&amp;#39; &amp;#39;off&amp;#39;
        option &amp;#39;interface&amp;#39; &amp;#39;&amp;#39;
        option &amp;#39;broadcast&amp;#39; &amp;#39;off&amp;#39;
Create a config for your laptop with the MAC address that you wrote down earlier.
config &amp;#39;target&amp;#39;
        option &amp;#39;name&amp;#39; &amp;#39;popeye&amp;#39;
        option &amp;#39;mac&amp;#39; &amp;#39;00:22:33:44:55:66&amp;#39;
Test with
/etc/init.d/etherwake start popeye
Enable port forwarding on the router Edit the firewall config at /etc/config/firewall and enable SSH and WOL:
config redirect
        option name &amp;#39;ssh&amp;#39;
        option src &amp;#39;wan&amp;#39;
        option proto &amp;#39;tcpudp&amp;#39;
        option src_dport &amp;#39;53734&amp;#39;
        option dest_ip &amp;#39;192.168.8.158&amp;#39;
        option dest_port &amp;#39;22&amp;#39;
        option target &amp;#39;DNAT&amp;#39;
        option dest &amp;#39;lan&amp;#39;
config redirect
        option name &amp;#39;wol&amp;#39;
        option src &amp;#39;wan&amp;#39;
        option proto &amp;#39;tcp udp&amp;#39;
        option src_dport &amp;#39;35122&amp;#39;
        option dest_ip &amp;#39;192.168.8.158&amp;#39;
        option dest_port &amp;#39;7&amp;#39;
Choose a src_dport that is higher than 10000 and NOT in this list, for security reasons.
Step 4: Install &amp;amp; run Docker images Install Docker on your system and pull the docker-compose setup.
Set up some subdomains pointing at your router&amp;rsquo;s IP and ports (or include a DuckDNS / DynDNS Docker container in the docker-compose file).
Next change the passwords and subdomains in the db.env and docker-compose.yml file. Run docker-compose up -d. Profit.
Useful docker commands Start Docker containers sudo docker-compose up -d
Show All containers sudo docker ps -a
Kill all Docker Containers: sudo docker kill $(docker ps -q)
Remove all Docker containers sudo docker rm $(docker ps -a -q)
Remove all docker images docker rmi $(docker images -q)
Delete Volumes docker system prune --all --volumes
]]></content:encoded></item></channel></rss>