
deco's People

Contributors

augustocb, boywithkeyboard, cecilia-marques, devartes, gabrielmatosboubee, guifromrio, guitavano, hugo-ccabral, igorbrasileiro, incognitadev, itamarrocha, lucis, lui-dias, marialuizaaires, matheusgr, mcandeia, monuelo, nathanluiz33, pedroteosousa, soutofernando, tlgimenes, verasthiago, victorsenam, viktormarinho, viniciustrr


deco's Issues

Allow ejecting static websites

Is it possible to eject a static website (discarding the CMS capabilities) to be hosted on a CDN, e.g. a landing page?
I couldn't find any reference to that in the documentation.

If it's not possible, that'd be an awesome feature.

Dangling references on resolvables lead to null-ish issues

Description

When deleting an audience directly through the UI, no validation is performed. This means that if the audience is referenced somewhere, the reference will point to a null value instead of the audience itself, which may cause null errors (a validation sketch follows the reproduction steps below).

How to reproduce

  1. Delete an audience
  2. Check that the routesSelection is broken
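
A minimal sketch of the missing validation, assuming resolvables are stored as a flat record and references are encoded as { __resolveType: "<id>" } nodes; the names here (Resolvable, findReferences, deleteAudience) are illustrative, not the actual deco API:

type Resolvable = Record<string, unknown>;

// Hypothetical guard: refuse to delete a resolvable while others reference it.
function findReferences(
  resolvables: Record<string, Resolvable>,
  target: string,
): string[] {
  return Object.entries(resolvables)
    .filter(([id, value]) =>
      id !== target &&
      // JSON.stringify emits no spaces, so this matches reference nodes.
      JSON.stringify(value).includes(`"__resolveType":"${target}"`)
    )
    .map(([id]) => id);
}

function deleteAudience(resolvables: Record<string, Resolvable>, id: string) {
  const refs = findReferences(resolvables, id);
  if (refs.length > 0) {
    throw new Error(`cannot delete "${id}": referenced by ${refs.join(", ")}`);
  }
  delete resolvables[id];
}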

Go-to-page (hitting `.` on sites) is not working when running an A/B test and the selected page is the B variant

Description

When hitting `.` while viewing the B variation of an A/B test, the shortcut does not work because the page is inlined instead of having an ID created for it.

How to reproduce

  1. Create an A/B test on Editor@v1 by Brazilian
  2. Select the B variant and hit the `.` key

Suggestion on how to solve:

This can be resolved by not using an audience (which is not the ideal use case anyway) to select pages. Instead, a new experiment should be created on live that selects between two different blocks based on a percentage.
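
A minimal sketch of such an experiment block, assuming blocks can be referenced by id via __resolveType; the names (BlockRef, trafficToB) are illustrative:

export interface BlockRef {
  __resolveType: string;
}

export interface Props {
  /** Fraction of traffic (0.0 to 1.0) that should see variant B. */
  trafficToB: number;
  variantA: BlockRef;
  variantB: BlockRef;
}

// Returning a reference instead of an inlined page keeps each variant
// addressable by its own ID, so the `.` go-to-page shortcut keeps working.
export default function experiment(
  { trafficToB, variantA, variantB }: Props,
): BlockRef {
  return Math.random() < trafficToB ? variantB : variantA;
}

A real implementation would also need sticky assignment (e.g., a cookie) so a visitor keeps seeing the same variant across requests.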

DenoDoc performance - Logbook

Logbook of Task: Dynamic Schema Generation and DecoHub Implementation

Task Overview:
The task at hand involves improving the dynamic schema generation process, particularly for supporting the DecoHub feature, which allows users to extend the components library with community-built components/blocks without requiring a redeploy. The current schema generation relies on deno doc during development time, but it poses limitations due to Deno Deploy's restrictions on syscalls. Various alternatives have been explored, and this logbook will document those attempts, along with their advantages, disadvantages, and future plans.

Current Schema Generation:
Currently, we generate a schema.gen.json file during development time. This is done because:

  1. Schema generation relies on deno doc, which generates a TypeScript "AST" with comments used for parsing code and generating the schema.
  2. Deno Deploy lacks syscall support, preventing the use of Deno CLI commands, and necessitates the use of the slower denodoc WASM (compiled from Rust) for schema generation.
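
For reference, the dev-time path boils down to shelling out to the CLI, roughly as in the sketch below (the module path is illustrative); it is exactly this subprocess call that Deno Deploy disallows, forcing the WASM fallback:

// Run `deno doc --json` against a module and parse the documented AST.
const cmd = new Deno.Command(Deno.execPath(), {
  args: ["doc", "--json", "./sections/MySection.tsx"],
  stdout: "piped",
});
const { stdout } = await cmd.output();
const docNodes = JSON.parse(new TextDecoder().decode(stdout));
// docNodes carries the TypeScript declarations and JSDoc comments that the
// schema generator walks to produce schema.gen.json.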

Goals and Their Impacts:
There are multiple goals related to dynamic schema generation, but the most significant ones are:

  1. Supporting Dynamic Imports: Allowing users to extend the components library with community-built components without requiring an initial component. It is the foundation for DecoHub, where devs can publish code, and users can install them without a redeploy.
  2. Improving Dev Experience: Enhancing the development experience by addressing schema generation issues and reducing its time consumption.

Alternative Approaches Explored:

1. Switch to WASM with Deno KV Cache:
Advantages:

  • No changes in dev mode.
  • No need for additional infrastructure.

Disadvantages:

  • Extremely slow due to no caching mechanism.
  • No "single flight" capability for schema generation, causing multiple isolates to start generating the schema simultaneously.

2. Shared DenoDoc Server (Rust, gRPC/WebSockets):
Advantages:

  • Shared cache with Content Addressable Storage to avoid redundant denodoc requests (see the cache sketch after this list).
  • Usable in other languages, enabling multi-threading.

Disadvantages:

  • Deno doesn't natively support gRPC, leading to challenges.
  • Implementing a DenoDoc cache in Rust required reimplementation efforts.
  • Multi-threading issues in the denodoc crate hindered progress.
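
A sketch of the content-addressed cache idea from the advantages above: the SHA-256 digest of a module's source is the cache key, so identical sources are documented only once (the `doc` callback stands in for the actual denodoc call):

const casCache = new Map<string, unknown>();

async function docWithCas(
  source: string,
  doc: (src: string) => Promise<unknown>, // abstracted denodoc invocation
): Promise<unknown> {
  const bytes = new TextEncoder().encode(source);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  const key = Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
  const hit = casCache.get(key);
  if (hit !== undefined) return hit;
  const result = await doc(source);
  casCache.set(key, result);
  return result;
}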

3. Deno-based Implementation of Approach 2:
Advantages:

  • Same approach as the Rust server but implemented in Deno.

Disadvantages:

  • Still slow and memory-intensive due to Deno's single-threaded nature.

4. Go-based Implementation of Approach 2:
Advantages:

  • Familiarity with Go allowed successful implementation.

Disadvantages:

  • Still slow due to the inherent time consumption of deno doc.

Revised Approach: Enabling Deco Hub:
Considering the challenges faced in previous attempts, the focus will be on enabling Deco Hub. Instead of generating the entire JSON schema, we will generate the denodoc cache and save it as a zstd-compressed file to minimize size (a compression sketch follows the lists below).

Advantages:

  • Faster, since only the difference between the current cache and the "published lib" needs to be generated.
  • No additional infrastructure required.

Disadvantages:

  • Dev experience still not optimal.
  • Runtime generation still needed.
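
A hedged sketch of the cache-publishing step, assuming it runs at publish time where syscalls are available and the zstd CLI is installed; file names are illustrative:

// Serialize the denodoc cache and compress it with the zstd CLI.
async function saveCompressedCache(cache: unknown) {
  await Deno.writeTextFile("denodoc-cache.json", JSON.stringify(cache));
  const zstd = new Deno.Command("zstd", {
    args: ["-f", "denodoc-cache.json", "-o", "denodoc-cache.json.zst"],
  });
  const { success } = await zstd.output();
  if (!success) throw new Error("zstd compression failed");
}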

Future Plans:
In planning for the future, a potential solution to address the dynamic schema generation and Deco Hub implementation challenges is to set up a separate infrastructure dedicated to generating the schemas. This infrastructure could handle a smaller portion of the overall traffic, perhaps around 10%. By doing so, we can alleviate the performance impact on the main production system and focus on optimizing schema generation on this specialized infrastructure.

To implement this approach, we could deploy the schema generation service within a Kubernetes cluster with appropriate resource allocation and scaling capabilities. To ensure consistency and reduce redundant calculations, we can leverage session stickiness to direct requests to the same "server" within the cluster, allowing the cache to remain fresh and reusable. This stickiness will enable us to take advantage of the cached schema data efficiently while minimizing redundant computations.

This dedicated schema generation infrastructure would provide a controlled environment, allowing us to experiment with different caching mechanisms, optimizations, and multi-threading techniques without impacting the primary production environment. We can continuously fine-tune the schema generation process to achieve maximum efficiency, reduced latency, and an overall improved development experience.

Unable to run the locally created site

Steps I took to create and run a new site locally:

  1. Cloned the repo to my file system
  2. Executed the following command:
deno run scripts/init.ts
  3. Followed the instructions and created a new site
  4. Entered the site folder and executed the command:
deno task play

It shows the site available at https://localhost:8000, but when I try to access the site it shows an error; I am attaching a screenshot of it.
Note: I am using a Windows 11 machine.

[screenshot of the error]

Use websocket for preview pages

Description

Currently the /live/preview route is one-to-one in terms of rendering, which means that each request hitting this endpoint asks for a single page preview.

Because of that, we use it with caution, trying to reduce the number of requests targeting this endpoint. For previews it would be better to create a single endpoint that receives websocket messages and returns HTML over the wire; this should reduce overall Deno Deploy costs because it does not require multiple connections/requests for previewing sections/pages or other blocks.

Generally speaking, this would also be useful for the admin when displaying the Visual Library blocks, which currently breaks when a single section can't be rendered. Operating over a websocket channel would allow us to split the work into multiple render requests to the websocket endpoint, avoiding breaking the entire Visual Library when a single component fails on render/preview.
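
A minimal sketch of the proposed endpoint (renderBlock is a hypothetical renderer and the message shape is illustrative): one connection serves many previews, and a failing block produces a per-block error instead of failing an entire HTTP response.

// Upgrade /live/preview to a websocket and answer render requests over it.
declare function renderBlock(block: string, props: unknown): Promise<string>;

export function previewSocket(req: Request): Response {
  const { socket, response } = Deno.upgradeWebSocket(req);
  socket.onmessage = async (event) => {
    const { id, block, props } = JSON.parse(event.data);
    try {
      const html = await renderBlock(block, props);
      socket.send(JSON.stringify({ id, html }));
    } catch (err) {
      // Only this block's preview fails; the channel stays open.
      socket.send(JSON.stringify({ id, error: String(err) }));
    }
  };
  return response;
}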

Can't create new deco play project

On play.deco.cx, the 3rd step of the manual tells me to create a new project:

Create a new project locally
deno run -Ar https://deco.cx/start

however, when running the command the following error occurs:

felix@mac ~ % deno run -Ar https://deco.cx/start
Warning Implicitly using latest version (0.209.0) for https://deno.land/std/encoding/base64.ts
error: Relative import path "std/path/mod.ts" not prefixed with / or ./ or ../
    at https://raw.githubusercontent.com/deco-cx/deco/main/engine/releases/fs.ts:2:22

This is because you have "std/" in your import_map.json, but the way the start script is deployed, the import map isn't being used. You can easily fix this by replacing the first two lines in your engine/releases/fs.ts with the full URLs of your dependencies.

import { debounce } from "std/async/debounce.ts";
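
which would become, for example (pinning the explicit std version resolved in the log above):

import { debounce } from "https://deno.land/std@0.209.0/async/debounce.ts";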

[Proposal] Headless configuration API

Title: Headless configuration API

Author: Marcos Candeia (@mcandeia)
Status: Discussion

Overview

The current architecture of live.ts is based on two main actors: the Admin and the Tenant Site, which are separate deployments. The Admin UI is responsible for writing into the configuration database, while the Tenant Site handles the read operations. However, this approach can lead to inconsistencies and limitations when dealing with configuration changes (releases). This proposal aims to address these challenges by introducing a new approach to handle configuration changes in live.ts.

Below you can see an oversimplified version of the current architecture.

[diagram: current architecture]

Background

In the current implementation, the Admin UI is responsible for writing configuration changes into the database, and releases are published as encapsulated blobs of configurations distributed across site deployments. However, relying solely on the Admin for writing and the Site for reading can lead to inconsistencies and limitations in managing configuration changes.

Problem 1: Inconsistency in Data Store Providers

The Admin UI manages multiple sites, making it challenging to handle different data store providers for each site. Storing this information within the Site deployment offers more flexibility and autonomy for each site to define its own data store infrastructure.

Problem 2: Validation and Open Source Flexibility

Allowing the Site deployment to validate against the current JSONSchema state offers greater control and flexibility. It also enables the possibility of creating a fully open-source version where the database can be switched to the user's file system, providing alternative storage options.

Problem 3: Authorization and Cross-Deployment Access

Enabling the Site deployment to have specific authorization keys allows for more secure and granular control over data storage. For example, in the case of running on Deno, Deno KV can be inaccessible for cross read/write operations between the Admin deployment and the Site Deno KV deployment.

Detailed Design

To address the challenges mentioned above, this proposal suggests the following changes to live.ts:

  • Authorized Write API: Introduce an authorized Write API that can be called by the Admin deployment. This API will allow the Admin to send configuration changes to the Site deployment securely.

  • Admin Authentication: The Admin deployment will be responsible for signing the requests sent to Live.ts. It will expose a public key that must be used to validate the signature of the requests, ensuring that they originate from the Admin deployment. This provides a mechanism for authentication and ensures the integrity of the configuration changes (a verification sketch follows the diagram below).

  • Key Rotation: Implement a mechanism to easily rotate the authentication key used by the Admin deployment. This will enhance security and allow for key management practices such as key revocation or key updates.

  • Trusted Public Keys: Enable the Site deployment to add multiple trusted public keys. This will provide flexibility in managing authentication and allow for multiple Admin deployments to interact with the Site securely.

  • Site private keys: Sites should handle any authorization necessary to read/write the target Storage Provider. For instance, when dealing with Supabase, sites should have a key with RLS access to their own data.

Below you can see the new architecture after the suggested changes:
[diagram: proposed architecture]
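
A sketch of what the signature check could look like on the Site side, assuming the Admin signs the raw request body with ECDSA P-256 and ships the base64 signature in a header; the header name and key handling are illustrative, not a final API:

// Validate that a write request was signed by one of the trusted Admin keys.
async function verifyAdminRequest(
  req: Request,
  trustedKeys: CryptoKey[], // multiple trusted public keys (enables rotation)
): Promise<boolean> {
  const header = req.headers.get("x-admin-signature");
  if (!header) return false;
  const signature = Uint8Array.from(atob(header), (c) => c.charCodeAt(0));
  const body = new Uint8Array(await req.clone().arrayBuffer());
  for (const key of trustedKeys) {
    const ok = await crypto.subtle.verify(
      { name: "ECDSA", hash: "SHA-256" },
      key,
      signature,
      body,
    );
    // Any trusted key may validate, so old keys keep working during rotation.
    if (ok) return true;
  }
  return false;
}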

Completion Checklist

The following tasks need to be completed to implement the proposed changes:

  • Design and implement the Authorized Write API endpoint.
  • Develop the Admin Authentication mechanism, including key generation and signature validation.
  • Implement key rotation functionality for the Admin deployment.
  • Enhance the Site deployment to support multiple trusted public keys.
  • Perform thorough testing and security audits to ensure the integrity and robustness of the solution.

Please share your thoughts, concerns, and suggestions to drive this proposal forward. Together, let's enhance live.ts to provide an even more reliable and flexible web framework.

[Proposal] Inline loaders

Inline loaders

Author: Marcos Candeia (@mcandeia)
State: Ready for Implementation

Overview

Loaders are a powerful way to fetch data from APIs. Currently, loaders are just ordinary functions that have access to the request and receive configuration data (as sections do), and they live inside the functions folder (soon to be loaders). They were created for a single purpose: to create a clear separation between where the data comes from and what the component data shape is. We can take the current implementation of Fashion's ProductShelf as an example:

export interface Props {
  title: string;
  products: Product[] | null;
  itemsPerPage?: number;
}

function ProductShelf({
  title,
  products,
}: Props) {
// [redacted code]
}

Note that the ProductShelf itself does not know where the Product[] comes from. The interesting part here is that the ProductShelf's Product[] would come from different APIs and even different e-commerce platforms, such as VTEX or Shopify.

Because of the nature of these components, where they define the shape of the data creating this clever separation, we're going to call them Universal Components. Universal Components are components that do not depend on any specific API; instead, they depend on the shape of the data. In fact, there are at least four different implementations for the Product[] that you can find in the std package. They are: VNDAProductList, VTEXProductList, VTEXLegacyProductList, and ShopifyProductList. This is only possible because we have a common ground type from schema.org named Product, which is declared once and imported from the implementer modules.

Notice that this is only possible because we have inverted the dependency order. Instead of the component depending on the data loader, the data loader depends on the component data shape, and so it can fetch data and convert it to the common ground type. In the case of the ProductShelf, the common ground type is the schema.org Product. However, it could be any ordinary TypeScript type that sections define or depend on. More importantly, you can have different packages implementing loaders that are unknown from the section's point of view, but they are suitable to be used as a Product[] loader (a.k.a. "interchangeable"), meaning that Universal Components are extensible by default.

Below you can see an explanation of how it looks in terms of imports:
[diagram: loader/section import relationships]

This proposal suggests adding a new type of loader: the inline loader. They are supposed to be used in scenarios where interchangeability is not paramount (e.g., 80% of landing pages). They are not meant to replace the current loader implementation. Instead, they should be used as a complement. In fact, there are scenarios where they can be used together.

Background

Start simple and add new abstractions gradually

Even considering that universal components are powerful abstractions, we should have a way to start simpler and gradually add abstractions as they become necessary. Currently, we force developers to create loaders separate from sections, which generally makes developers lose focus on the code and go to the Editor admin to see it in action; see the current data-fetching documentation and notice that the admin is required even for the simplest use case of fetching data.

Code localization

Allowing developers to fetch data in the same place where UI components are created is great for DevX, often called "colocation" (in Portuguese it sounds slightly better: "colocalização"), since they don't need to open multiple files/directories to understand how the code connects together, requiring less cognitive effort to understand the "big picture".

People are trying to use loaders as a function (trying to invoke them)

This is purely empirical, but it's not unusual to see people asking how to use loader X or Y inside a section, trying to call the function directly.

Detailed Design

Inline loaders are just ordinary loaders written within the same section (or island) file. They are invoked first, and when chosen, the section props are not used; instead, the loader props are the ones filled in the Editor Admin. This means that developers should declare a type containing the properties used by the loader itself as well as "passing properties", those used only by the target section/component. To show how it will look, let's rewrite the entire data-fetching documentation by removing the loader and using an inline loader.

Important note: Loaders and inline loaders have the same signature on purpose, making it possible to refactor the code as easily as copying and pasting the loader code into the loaders folder.

The final code implementation for the DogFacts tutorial is the following:

A dogApiFacts.ts file inside the /loaders folder with the following content:

import type { LoaderContext } from "$live/types.ts";

// Return type of this loader
export interface DogFact {
  fact: string;
}

// Props type that will be configured in deco.cx's Admin
export interface Props {
  numberOfFacts?: number;
}

export default async function dogApiFacts(
  _req: Request,
  { state: { $live: { numberOfFacts } } }: LoaderContext<Props>,
): Promise<DogFact[]> {
  const { facts } = (await fetch(
    `https://dogapi.dog/api/facts?number=${numberOfFacts ?? 1}`,
  ).then((r) => r.json())) as { facts: string[] };
  return facts.map((fact) => ({ fact }));
}

A section named DogFacts.tsx inside the /sections folder with the following content:

import { DogFact } from "../loaders/dogApiFacts.ts";

export interface Props {
  title: string;
  dogFacts: DogFact[];
}

export default function DogFacts({ title, dogFacts }: Props) {
  return (
    <div class="p-4">
      <h1 class="font-bold">{title}</h1>
      <ul>
        {dogFacts.map(({ fact }) => <li>{fact}</li>)}
      </ul>
    </div>
  );
}

To see it in action, you need to go to the admin and configure the section by selecting the created loader and also filling in the section props. This is not too complex, but it requires at least two files just because we decided to fetch data.

The dogFacts scenario is a perfect fit: you can just fetch from the dogFacts API without worrying about interchangeability, and it is not common to need a replacement for it in the short term (especially for a tutorial).

So let's rewrite it:

  1. Remove dogApiFacts.ts from the /loaders folder
  2. Rewrite the DogFacts.tsx section to:
import type { LoaderContext } from "$live/types.ts";
import type { SectionProps } from "$live/mod.ts";

export default function DogFacts(
  { title, dogFacts }: SectionProps<typeof loader>,
) {
  return (
    <div class="p-4">
      <h1 class="font-bold">{title}</h1>
      <ul>
        {dogFacts.map((fact) => <li>{fact}</li>)}
      </ul>
    </div>
  );
}

// Props type that will be configured in deco.cx's Admin
export interface LoaderProps {
  title: string;
  numberOfFacts?: number;
}

export async function loader(
  _req: Request,
  { state: { $live: { numberOfFacts, title } } }: LoaderContext<LoaderProps>,
) {
  const { facts: dogFacts } = (await fetch(
    `https://dogapi.dog/api/facts?number=${numberOfFacts ?? 1}`,
  ).then((r) => r.json())) as { facts: string[] };
  return { dogFacts, title };
}

Notice a few things:

  1. The section props will no longer be used in the Editor Admin; instead, the loader props will be used.
  2. As previously mentioned, LoaderProps should contain any information needed to fetch data as well as the passing properties for the target section, so it should receive title and pass it to the target section even though title is not used by the loader directly.

To solve scenarios like this, I'm also proposing an advanced usage of loaders called PropsLoader, which is basically a way to define a loader for a single property (or a selected set of properties) and let the other properties be passed automatically by the framework. Let's rewrite the previous example again:

import { PropsLoader } from "$live/mod.ts";
import type { LoaderContext } from "$live/types.ts";

// Props type that will be configured in deco.cx's Admin
export interface LoadProps {
  title: string;
  numberOfFacts?: number;
}

async function dogFacts(
  _req: Request,
  { state: { $live: { numberOfFacts } } }: LoaderContext<LoadProps>,
): Promise<string[]> {
  const { facts } = (await fetch(
    `https://dogapi.dog/api/facts?number=${numberOfFacts ?? 1}`,
  ).then((r) => r.json())) as { facts: string[] };
  return facts;
}

export interface Props {
  title: string;
  dogFacts: string[];
}

export default function DogFacts({ title, dogFacts }: Props) {
  return (
    <div class="p-4">
      <h1 class="font-bold">{title}</h1>
      <ul>
        {dogFacts.map((fact) => <li>{fact}</li>)}
      </ul>
    </div>
  );
}

export const loader: PropsLoader<
  Props,
  LoadProps
> = {
  dogFacts,
};

The code is very similar, but you can notice that:

  1. The loader is simpler and can be extracted more easily because the signature (input/output) is exactly what is expected by the current tutorial implementation.
  2. The other props can simply be omitted and will be passed by the framework.

I believe this second option would also be used when multiple fetches must be invoked and the framework handles them in parallel instead of delegating this to the dev.

Expectations and alternatives

Async components

Async components were an alternative, simpler in terms of usability, because they let developers make fetch calls inside the component once (at startup) and use the data in a closure function; see the example below:

// Props type that will be configured in deco.cx's Admin
export interface Props {
  title: string;
  numberOfFacts?: number;
}

export default async function DogFacts(
  { title, numberOfFacts }: Props,
) {
  const { facts } = (await fetch(
    `https://dogapi.dog/api/facts?number=${numberOfFacts ?? 1}`,
  ).then((r) => r.json())) as { facts: string[] };
  return () => (
    <div class="p-4">
      <h1 class="font-bold">{title}</h1>
      <ul>
        {facts.map((fact) => <li>{fact}</li>)}
      </ul>
    </div>
  );
}

What I didn't like in this approach is that the return needs to be wrapped inside a parameterless function () =>, which will probably lead to many hard-to-debug bugs: if the dev forgets to add the closure function, it will work but without any fresh hook, causing a lot of confusion. It's also not trivial or elegant to return a closure function, so I decided (together with the team) not to consider this possibility.

Completion checklist

[ ] Change section block to allow loader function
[ ] Update documentation

[Proposal] Extension block

Extension Blocks

Author: Marcos Candeia (@mcandeia)
State: Discussion

Overview

When using loaders to fetch data from APIs, it is common to need to add (or change) fields on existing return types. This can be a challenging task when working with imported loaders, such as the ones that return Product from schema.org. One feasible solution would be forking the loader source code and applying the necessary modifications; the drawback is that you give up automatic updates from the loader's creators. Another solution would be importing the loader and adding your new fields, but then you must be aware of and replicate this for every new loader that is implemented: if you have 10 loaders that return Products, you now have to import/export all of them. Let's take as an example a real-world use case of adding the reviews of a product (the number of "stars" of a given product).

This task requires changing the aggregateRating property of Product types. Please note the following conditions:

  1. It can't be enabled by default because it requires configuration (e.g., secrets that will be used to fetch such information).
  2. For this scenario we want to extend an existing type without modifying the source code. This should be possible especially because the developer responsible for adding the ratings feature may have no access to the source code repository that owns the loader source code.
  3. Not every product needs such information, and sometimes it is tied to the platform, so the way you get product ratings can vary across loaders on the same site.

One solution to this problem is the use of extension blocks, which allow developers to add new fields to existing types without modifying the source code. Extension blocks provide a way to "extend" types with additional functionality, without having to modify the original source of data.

Background

There are multiple challenges when extending existing types, including:

Codebase fragmentation

When a new property needs to be added or modified, the codebase may become fragmented as different parts of the application may be affected. This can lead to increased complexity and difficulty in maintaining the codebase.

Dependency management

If the loaders are dependent on other sites, a change to the property may require updating those dependencies as well. This can lead to conflicts with other parts of the application that depend on different versions of the same site.

Testing and validation

Any changes to the property require testing and validation to ensure that they do not introduce bugs or unintended behavior. This can be time-consuming and expensive, especially if the changes affect critical parts of the application.

Documentation and communication

When a new property is added, it is important to update the documentation and communicate the changes to other developers who may be affected. This can be challenging if there are multiple loaders or if the changes are complex.

Maintainability and backward compatibility

Finally, any changes to the property need to be done in a way that maintains backward compatibility and does not break existing code. This can be difficult if the property is deeply integrated into the codebase or if there are many dependent modules.

Detailed design

Extension blocks are implemented using a simple and effective design pattern. The basic idea is to provide a modular way to extend existing code without modifying the source code itself. The implementation is quite straightforward, and it involves a few simple steps.

Creating the Product extension

First, the developer creates a function that takes the original type and returns an extended type. This function is referred to as an extension block. The extension block can be used to add new properties or methods to the original type.

The following code would be used to add aggregateRating to an existing product instance.

import { Product } from "deco-sites/std/commerce/types.ts";
import {
  ConfigYourViews,
  RatingFetcher,
} from "deco-sites/std/commerce/yourViews/client.ts";
import { ExtensionOf } from "https://denopkg.com/deco-cx/live@3c5ca2344ff1d8168085a3d5685c57100e6bdedb/blocks/extension.ts";
import { createClient } from "../commerce/yourViews/client.ts";

export type Props = ConfigYourViews;

const aggregateRatingFor =
  (fetcher: RatingFetcher) => async ({ isVariantOf }: Product) => {
    const productId = isVariantOf!.productGroupID;
    const rating = await fetcher(productId);

    return rating
      ? {
        "@type": "AggregateRating" as const,
        ratingCount: rating.TotalRatings,
        ratingValue: rating.Rating,
      }
      : undefined;
  };

export default function AddYourViews(config: Props): ExtensionOf<Product> {
  const client = createClient(config);
  const aggregateRating = aggregateRatingFor(client.rating.bind(client));

  return {
    aggregateRating,
  };
}

This code should live within the extensions/ folder, with an arbitrary name. The format of an extension is basically the same fields as the product that we want to extend, except that instead of returning values directly, developers can fetch data from APIs for every field that needs to be modified/added. Also, the extension function has the following signature:

export type ExtFunc<
  T,
  TBase,
  IsParentOptional,
  PropIsOptional = IsParentOptional,
> = (
  arg: TBase,
  current: IsParentOptional extends true ? T | undefined : T,
) => PromiseOrValue<
  PropIsOptional extends false ? DeepPartial<T> : DeepPartial<T> | undefined
>;

Where:

  1. T is the current property value; for instance, in the case of aggregateRating it will be the current aggregateRating value.
  2. TBase is the entire target object, in this case the entire Product.
  3. The return is a DeepPartial<T>, which means that the result will be merged with the original object.

Optionally, when dealing with collections that should be changed as a whole (a new property added or changed on each element), a special property name _forEach accepts a function that will be applied to each element.

The example below shows how to add +10 to every price inside the offers array (yes, the Product type has an offers.offers property; the latter is an array and the former an object).

export default function Add10Price(): ExtensionOf<Product> {
  return {
    offers: {
      offers: {
        _forEach: {
          // Receives the whole Product and the current price, returns the new price.
          price: (_p: Product, curr: number) => curr + 10,
        },
      },
    },
  };
}
The WithExtensions loader

A new loader is being added alongside the extension block: the WithExtensions loader, which has a single task: get data (products, in this case) from loaders and apply the configured extension transformations. It is a simple loader that can be used on any field that accepts a loader, and it has two properties, the data and the extension. The WithExtensions loader acts as a middle-man that gets data from loaders and applies the transformations in parallel, merging them together.

This is the proposed implementation for such a loader:

export interface Props<T> {
  data: T;
  extension: Extension<T>;
}

export default async function withExtensions<T>(
  _req: Request,
  ctx: LoaderContext<Props<T>>,
) {
  const extended = await ctx.state.$live.extension?.(ctx.state.$live.data);
  return extended?.merged(); // returns the extension applied to the target object
}

Composite extension

As you can see in the previous example, the loader contains only one extension property, not an array of them. This is only for simplicity, to avoid code duplication when dealing with multiple extensions. For that, I propose a Composite extension that receives an array of extensions and composes them into a single one, which makes it really easy to allow extensions on other blocks in the future. You can see the proposed code below.

import { Extended, Extension } from "$live/blocks/extension.ts";
import { notUndefined } from "$live/engine/core/utils.ts";
import { deepMergeArr } from "$live/utils/object.ts";
import { DeepPartial } from "https://esm.sh/v114/utility-types";

export interface Props {
  extensions: Extension[];
}
const apply = <T, R>(param: T) => (f: (arg: T) => Promise<R>) => f(param);

export default function composite({ extensions }: Props) {
  return async <TData>(data: TData) => {
    const applied = (await Promise.all(
      extensions?.filter(notUndefined).map(
        apply(data),
      ),
    )) as Extended<TData>[];
    return applied.reduce(
      (finalObj, extended) =>
        deepMergeArr<DeepPartial<TData>>(
          finalObj,
          extended.value,
        ),
      {},
    );
  };
}

One key advantage of this approach is that it allows for composability of extensions. Since each extension block creates a separate instance of the extended type, multiple extension blocks can be combined to create even more complex objects. This makes it easy to add new functionality to existing code without modifying the original source.

Overall, extension blocks are a powerful tool for developers looking to extend existing code in a modular and composable way. By allowing for easy extension of existing types and objects, extension blocks help to improve code maintainability and reduce the need for code duplication.

It is important to mention that only one task is required for each persona.

For developers who want to extend existing types:

  1. A new extension needs to be added, thus a new extension module inside extensions/ folder.

For business users:

  1. Configure the new extension on existing loaders.

Expectations and alternatives

Completion checklist

[ ] Create the extension block
[ ] Update documentation

Redesign Flags concepts

Description

Currently, flags are used in a non-opaque way (see this example): audiences are just flags, but the consumer, in this case routesSelection, knows that it is dealing with an audience. Ideally, flags should be opaque, in the sense that RoutesSelection should receive the result of routes and overrides and apply only the merge logic.

Design

Flags should return whatever is necessary to be used; a flag should evaluate itself and return its true or false value.

This allows flags to be used anywhere a block is requested, because you can swap out any Block for a Flag that returns a block instead.
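
A minimal sketch of an opaque flag under this design (Flag and FlagProps are illustrative names): because the flag returns the same type as the block it wraps, any block slot can be swapped for a flag without the consumer knowing.

export interface FlagProps<TBlock> {
  // The flag evaluates itself against the request...
  matcher: (req: Request) => boolean;
  whenTrue: TBlock;
  whenFalse: TBlock;
}

// ...and returns one of the wrapped blocks, keeping the flag opaque:
// RoutesSelection would only ever see a block, never an "audience".
export default function flag<TBlock>(
  { matcher, whenTrue, whenFalse }: FlagProps<TBlock>,
) {
  return (req: Request): TBlock => (matcher(req) ? whenTrue : whenFalse);
}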
