deco-cx / deco
Git-based Visual CMS for Deno, </> htmx and Tailwind apps. Deploy on any Deno-compatible host.
Home Page: https://deco.cx
License: Apache License 2.0
Is it possible to eject a static website (discarding the CMS capabilities) to be hosted on a CDN, e.g. a landing page?
I couldn't find any reference to that in the documentation.
If it's not possible, that'd be an awesome feature.
When deleting an audience directly through the UI, no validation is performed. This means that if the audience is referenced somewhere, the reference will point to a null value instead of the audience itself, which may cause null errors.
When visualizing the B variation of an A/B test, the page is not working due to inlining the page instead of creating an ID for the page itself.
Suggestion on how to solve: this can be resolved by not using an audience to select pages (which is not the ideal use case anyway). Instead, a new experiment should be created on live that selects between two different blocks based on a percentage.
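For illustration, such an experiment block could look roughly like this (a minimal sketch with illustrative names only, not the actual live implementation):

```ts
// Hypothetical experiment block: picks between two blocks by percentage.
// All names here are illustrative, not part of the live API.
export interface Props<TBlock> {
  percentage: number; // fraction of traffic (0..1) routed to variantB
  variantA: TBlock;
  variantB: TBlock;
}

export default function experiment<TBlock>(
  { percentage, variantA, variantB }: Props<TBlock>,
): TBlock {
  // Decide per evaluation which block the visitor gets.
  return Math.random() < percentage ? variantB : variantA;
}
```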
Because we depend on siteId in our events, it's not possible to completely remove the site id reference from our pages, which also makes it impossible to remove the siteId configuration.
cc: @matheusgr
Logbook of Task: Dynamic Schema Generation and DecoHub Implementation
Task Overview:
The task at hand involves improving the dynamic schema generation process, particularly to support the DecoHub feature, which allows users to extend the components library with community-built components/blocks without requiring a redeploy. The current schema generation relies on deno doc
at development time, but this poses limitations due to Deno Deploy's restrictions on syscalls. Various alternatives have been explored, and this logbook documents those attempts, along with their advantages, disadvantages, and future plans.
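As a rough illustration of why the syscall restriction matters: the dev-time step boils down to spawning the deno doc CLI as a subprocess and parsing its JSON output, which Deno Deploy does not allow. A minimal sketch, assuming Deno's subprocess API (the entry point path is illustrative):

```ts
// Sketch: shell out to `deno doc --json` and parse the documentation nodes.
// Spawning a subprocess is exactly the kind of syscall Deno Deploy forbids.
const cmd = new Deno.Command("deno", {
  args: ["doc", "--json", "./mod.ts"], // entry point is illustrative
  stdout: "piped",
});
const { stdout } = await cmd.output();
const docNodes = JSON.parse(new TextDecoder().decode(stdout));
// docNodes carries the typed "AST" with comments used to build schema.gen.json
console.log("doc nodes:", docNodes);
```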
Current Schema Generation:
Currently, we generate a schema.gen.json file at development time. This is done because we rely on deno doc, which generates a TypeScript "AST" with comments that are parsed to generate the schema.
Goals and Their Impacts:
There are multiple goals related to dynamic schema generation, but the most significant ones are:
Alternative Approaches Explored:
1. Switch to WASM with Deno KV Cache:
Advantages:
Disadvantages:
2. Shared DenoDoc Server (Rust, gRPC/WebSockets):
Advantages:
Disadvantages:
3. Deno-based Implementation of Approach 2:
Advantages:
Disadvantages:
4. Go-based Implementation of Approach 2:
Advantages:
Disadvantages:
Revised Approach: Enabling Deco Hub:
Considering the challenges faced in previous attempts, the focus will be on enabling Deco Hub. Instead of generating the entire JSON schema, we will generate the denodoc cache and save it as a ZSTD-compressed file to minimize size.
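A minimal sketch of that step, assuming the zstd_wasm module from deno.land/x (the module path and the cache file name are assumptions, not confirmed APIs):

```ts
// Sketch: persist the denodoc cache as a ZSTD-compressed file.
// zstd_wasm and the file names are assumptions for illustration.
import { compress, init } from "https://deno.land/x/zstd_wasm/deno/zstd.ts";

await init(); // load the wasm binary before using compress/decompress

const denodocCache = await Deno.readFile("doccache.json");
const compressed = compress(denodocCache, 19); // higher level trades speed for size
await Deno.writeFile("doccache.json.zst", compressed);
```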
Advantages:
Disadvantages:
Future Plans:
In planning for the future, a potential solution to address the dynamic schema generation and Deco Hub implementation challenges is to set up a separate infrastructure dedicated to generating the schemas. This infrastructure could handle a smaller portion of the overall traffic, perhaps around 10%. By doing so, we can alleviate the performance impact on the main production system and focus on optimizing schema generation on this specialized infrastructure.
To implement this approach, we could deploy the schema generation service within a Kubernetes cluster with appropriate resource allocation and scaling capabilities. To ensure consistency and reduce redundant calculations, we can leverage session stickiness to direct requests to the same "server" within the cluster, allowing the cache to remain fresh and reusable. This stickiness will enable us to take advantage of the cached schema data efficiently while minimizing redundant computations.
This dedicated schema generation infrastructure would provide a controlled environment, allowing us to experiment with different caching mechanisms, optimizations, and multi-threading techniques without impacting the primary production environment. We can continuously fine-tune the schema generation process to achieve maximum efficiency, reduced latency, and an overall improved development experience.
deno run scripts/init.ts
deno task play
It's showing the site available on https://localhost:8000, but when I try to access the site it shows me an error; I am attaching a screenshot of it.
Note: I am using a Windows 11 machine.
Currently the /live/preview route is one-to-one in terms of rendering, which means that each request hitting this endpoint asks for a single page preview.
Because of that, we use it with caution, trying to reduce the number of requests targeting this endpoint. For previews it would be better to create a single endpoint that receives websocket messages and returns HTML over the wire; this should reduce overall Deno Deploy costs because it does not require multiple connections/requests for previewing sections/pages or other blocks.
Generally speaking, this might also be useful for the admin when displaying the Visual Library blocks, which currently break when a single section can't be rendered. Operating over a websocket channel would let us split the work into multiple render requests to the websocket endpoint, avoiding breaking the entire Visual Library when a single component fails to render/preview.
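A rough sketch of what such an endpoint could look like (the message shape and the renderBlock helper are hypothetical placeholders, not the actual API):

```ts
// Hypothetical renderer for a single block; stands in for whatever
// live.ts uses internally to turn a section/page into HTML.
declare function renderBlock(block: string, props: unknown): Promise<string>;

export function handlePreviewSocket(req: Request): Response {
  const { socket, response } = Deno.upgradeWebSocket(req);
  socket.onmessage = async (event) => {
    // Hypothetical message shape: one render request per message.
    const { id, block, props } = JSON.parse(event.data);
    try {
      const html = await renderBlock(block, props);
      socket.send(JSON.stringify({ id, html }));
    } catch (err) {
      // A failing block fails only its own message, so the Visual
      // Library keeps rendering the remaining blocks.
      socket.send(JSON.stringify({ id, error: String(err) }));
    }
  };
  return response;
}
```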
On play.deco.cx the 3rd step of the manual is telling me to create a new project
Create a new project locally
deno run -Ar https://deco.cx/start
however, when running the command the following error occurs:
felix@mac ~ % deno run -Ar https://deco.cx/start
Warning Implicitly using latest version (0.209.0) for https://deno.land/std/encoding/base64.ts
error: Relative import path "std/path/mod.ts" not prefixed with / or ./ or ../
at https://raw.githubusercontent.com/deco-cx/deco/main/engine/releases/fs.ts:2:22
This is because you have "std/" in your import_map.json, but the way the start script is deployed, the import map isn't being used. You can easily fix this by replacing the first two lines of engine/releases/fs.ts (line 1 at commit 990bc79) with the full URLs of your dependencies.
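For example (a sketch; the actual names imported in fs.ts may differ, and the std version is taken from the warning above):

```ts
// Before: bare specifier, only resolvable when import_map.json is applied
// import { join } from "std/path/mod.ts";

// After: fully qualified URL, works without an import map
import { join } from "https://deno.land/std@0.209.0/path/mod.ts";
```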
Author: Marcos Candeia (@mcandeia)
Status: Discussion
The current architecture of live.ts is based on two main actors: the Admin and the Tenant Site, which are separate deployments. The Admin UI is responsible for writing into the configuration database, while the Tenant Site handles the read operations. However, this approach can lead to inconsistencies and limitations when dealing with configuration changes (releases). This proposal aims to address these challenges by introducing a new approach to handle configuration changes in live.ts.
Below you can see an oversimplified version of the current architecture.
In the current implementation, the Admin UI is responsible for writing configuration changes into the database, and releases are published as encapsulated blobs of configurations distributed across site deployments. However, relying solely on the Admin for writing and the Site for reading can lead to inconsistencies and limitations in managing configuration changes.
The Admin UI manages multiple sites, making it challenging to handle different data store providers for each site. Storing this information within the Site deployment offers more flexibility and autonomy for each site to define its own data store infrastructure.
Allowing the Site deployment to validate against the current JSONSchema state offers greater control and flexibility. It also enables the possibility of creating a fully open-source version where the database can be switched to the user's file system, providing alternative storage options.
Enabling the Site deployment to have specific authorization keys allows for more secure and granular control over data storage. For example, in the case of running on Deno, Deno KV can be inaccessible for cross read/write operations between the Admin deployment and the Site Deno KV deployment.
To address the challenges mentioned above, this proposal suggests the following changes to live.ts:
Authorized Write API: Introduce an authorized Write API that can be called by the Admin deployment. This API will allow the Admin to send configuration changes to the Site deployment securely.
Admin Authentication: The Admin deployment will be responsible for signing the requests sent to Live.ts. It will expose a public key that must be used to validate the signature of the requests, ensuring that they originate from the Admin deployment. This provides a mechanism for authentication and ensures the integrity of the configuration changes (a verification sketch is shown after this list).
Key Rotation: Implement a mechanism to easily rotate the authentication key used by the Admin deployment. This will enhance security and allow for key management practices such as key revocation or key updates.
Trusted Public Keys: Enable the Site deployment to add multiple trusted public keys. This will provide flexibility in managing authentication and allow for multiple Admin deployments to interact with the Site securely.
Site private keys: Sites should handle any necessary authorization to read/write the target Storage Provider. For instance, when dealing with Supabase, sites should have a key with RLS access to their own data.
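A minimal sketch of how the Site side could verify signed writes, assuming Ed25519 keys via the Web Crypto API (the header name and encoding are assumptions, not the actual live.ts API):

```ts
// Verify that a write request was signed by a trusted Admin deployment.
// The x-admin-signature header and base64 encoding are assumptions.
export async function verifyAdminRequest(
  req: Request,
  trustedKeys: CryptoKey[], // public keys the Site deployment trusts
): Promise<boolean> {
  const signature = req.headers.get("x-admin-signature");
  if (!signature) return false;
  const payload = new TextEncoder().encode(await req.clone().text());
  const sigBytes = Uint8Array.from(atob(signature), (c) => c.charCodeAt(0));
  // Accepting any trusted key is what enables key rotation and
  // multiple Admin deployments writing to the same Site.
  for (const key of trustedKeys) {
    if (await crypto.subtle.verify("Ed25519", key, sigBytes, payload)) {
      return true;
    }
  }
  return false;
}
```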
Below you can see the new architecture after the suggested changes
The following tasks need to be completed to implement the proposed changes:
Please share your thoughts, concerns, and suggestions to drive this proposal forward. Together, let's enhance live.ts to provide an even more reliable and flexible web framework.
The dev code lacks a check for changes in the tailwind config. The process restarts, but deco.gen.ts is not re-generated.
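A minimal sketch of the missing check, assuming Deno.watchFs; regenerate() is a hypothetical placeholder for the actual codegen step:

```ts
// Watch the tailwind config and regenerate deco.gen.ts on change.
// regenerate() stands in for the dev server's actual codegen step.
declare function regenerate(): Promise<void>;

for await (const event of Deno.watchFs("./tailwind.config.ts")) {
  if (event.kind === "modify") {
    await regenerate(); // re-create deco.gen.ts instead of only restarting
  }
}
```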
Author: Marcos Candeia (@mcandeia)
State: Ready for Implementation
Loaders are a powerful way to fetch data from APIs. Currently, loaders are just ordinary functions that have access to the request and receive configuration data; like sections, they live inside the functions folder (or, soon, the loaders folder). They were created for a single purpose: to create a clear separation between where the data comes from and what the component data shape is. We can take the current implementation of Fashion's ProductShelf as an example:
export interface Props {
  title: string;
  products: Product[] | null;
  itemsPerPage?: number;
}

function ProductShelf({
  title,
  products,
}: Props) {
  // [redacted code]
}
Note that the ProductShelf itself does not know where the Product[] comes from. The interesting part here is that the ProductShelf's Product[] would come from different APIs and even different e-commerce platforms, such as VTEX or Shopify.
Because of the nature of these components, where they define the shape of the data creating this clever separation, we're going to call them Universal Components. Universal Components are components that do not depend on any specific API; instead, they depend on the shape of the data. In fact, there are at least four different implementations for the Product[] that you can find in the std package. They are: VNDAProductList, VTEXProductList, VTEXLegacyProductList, and ShopifyProductList. This is only possible because we have a common ground type from schema.org named Product, which is declared once and imported from the implementer modules.
Notice that this is only possible because we have inverted the dependency order. Instead of the component depending on the data loader, the data loader depends on the component data shape, and so it can fetch data and convert it to the common ground type. In the case of the ProductShelf, the common ground type is the schema.org Product. However, it could be any ordinary TypeScript type that sections define or depend on. More importantly, you can have different packages implementing loaders that are unknown from the section's point of view, but they are suitable to be used as a Product[] loader (a.k.a. "interchangeable"), meaning that Universal Components are extensible by default.
Below you can see an explanation of how it looks in terms of imports:
This proposal suggests adding a new type of loader: the inline loader. Inline loaders are meant for scenarios where interchangeability is not paramount (e.g., 80% of landing pages). They are not meant to replace the current loader implementation; instead, they should be used as a complement. In fact, there are scenarios where they can be used together.
Even considering that universal components are powerful abstractions, we need to take into consideration that we should have a way to start simpler and gradually add abstractions as they become necessary. Currently, we force developers to create loaders separated from sections, which generally makes the developer lose focus on the code in order to go to the Editor admin and see it in action; see the current data-fetching documentation and notice that there are parts where the admin is required even for the simplest use case of fetching data.
Allowing developers to fetch data in the same place where UI components are created is great for DevX, often called "colocation" (in Portuguese it sounds slightly better: "colocalização"), since they don't need to open multiple files/directories to understand how their code connects together, requiring less cognitive effort to understand the "big picture".
This is pure empiricism, but it's not so unusual to see people asking how to use loader X or Y inside a section, trying to call the function itself.
Inline loaders are just ordinary loaders written within the same section (or island) file. They are invoked first, and when chosen, the section props are not used; instead, the loader props are the ones fulfilled in the Editor Admin. This means developers should declare a type containing the properties used by the loader itself, as well as "passing properties", those used only by the target section/component. To show how it will look, let's rewrite the entire data-fetching documentation by removing the loader and using the inline loader.
Important note: loaders and inline loaders have the same signature on purpose, making it possible to refactor the code by simply copying and pasting the loader code into the loaders folder.
The final code implementation for the DogFacts tutorial is the following: a dogApiFacts.ts file inside the /loaders folder with the following content:
import type { LoaderContext } from "$live/types.ts";

// Return type of this loader
export interface DogFact {
  fact: string;
}

// Props type that will be configured in deco.cx's Admin
export interface Props {
  numberOfFacts?: number;
}

export default async function dogApiFacts(
  _req: Request,
  { state: { $live: { numberOfFacts } } }: LoaderContext<Props>,
): Promise<DogFact[]> {
  const { facts } = (await fetch(
    `https://dogapi.dog/api/facts?number=${numberOfFacts ?? 1}`,
  ).then((r) => r.json())) as { facts: string[] };
  return facts.map((fact) => ({ fact }));
}
A section named DogFacts.tsx inside the /sections folder with the following content:
import { DogFact } from "../loaders/dogApiFacts.ts";

export interface Props {
  title: string;
  dogFacts: DogFact[];
}

export default function DogFacts({ title, dogFacts }: Props) {
  return (
    <div class="p-4">
      <h1 class="font-bold">{title}</h1>
      <ul>
        {dogFacts.map(({ fact }) => <li>{fact}</li>)}
      </ul>
    </div>
  );
}
To see it in action you need to go to the admin and configure the section by selecting the created loader and also filling in the section props. This is not too complex, but it requires at least two files just because we decided to fetch data.
The dogFacts scenario is a perfect fit for an inline loader: you can just fetch from the dog facts API without worrying about interchangeability, and it is not so common to have a replacement for it in the short term (especially for a tutorial).
So let's rewrite them:
Remove dogApiFacts.ts from the /loaders folder and change the DogFacts.tsx section to:
import type { LoaderContext } from "$live/types.ts";
import type { SectionProps } from "$live/mod.ts";

export default function DogFacts(
  { title, dogFacts }: SectionProps<typeof loader>,
) {
  return (
    <div class="p-4">
      <h1 class="font-bold">{title}</h1>
      <ul>
        {dogFacts.map((fact) => <li>{fact}</li>)}
      </ul>
    </div>
  );
}
// Props type that will be configured in deco.cx's Admin
export interface LoaderProps {
  title: string;
  numberOfFacts?: number;
}

export async function loader(
  _req: Request,
  { state: { $live: { numberOfFacts, title } } }: LoaderContext<LoaderProps>,
) {
  const { facts: dogFacts } = (await fetch(
    `https://dogapi.dog/api/facts?number=${numberOfFacts ?? 1}`,
  ).then((r) => r.json())) as { facts: string[] };
  return { dogFacts, title };
}
Notice a few things: the loader needs to receive title and pass it to the target section even when title is not used by the loader directly. To solve scenarios like this, I'm also proposing an advanced usage of loaders called PropsLoader, which is basically a way to define a loader for a single property (or selected/multiple properties) and let the other properties be passed automatically by the framework. Let's rewrite the previous example again:
import { PropsLoader } from "$live/mod.ts";
import type { LoaderContext } from "$live/types.ts";

// Props type that will be configured in deco.cx's Admin
export interface LoadProps {
  title: string;
  numberOfFacts?: number;
}

async function dogFacts(
  _req: Request,
  { state: { $live: { numberOfFacts } } }: LoaderContext<LoadProps>,
): Promise<string[]> {
  const { facts } = (await fetch(
    `https://dogapi.dog/api/facts?number=${numberOfFacts ?? 1}`,
  ).then((r) => r.json())) as { facts: string[] };
  return facts;
}

export interface Props {
  title: string;
  dogFacts: string[];
}

export default function DogFacts({ title, dogFacts }: Props) {
  return (
    <div class="p-4">
      <h1 class="font-bold">{title}</h1>
      <ul>
        {dogFacts.map((fact) => <li>{fact}</li>)}
      </ul>
    </div>
  );
}

export const loader: PropsLoader<Props, LoadProps> = {
  dogFacts,
};
The code is very similar, but you can notice that the loader now fulfills only the dogFacts property, while title is passed automatically by the framework. I believe this second option would also be useful when multiple fetches must be invoked and the framework handles them in parallel instead of delegating this to the dev.
Async components were an alternative that is simpler in terms of usability, because they let developers make fetch calls inside the component once (at startup) and use the data in a closure function; see the example below:
// Props type that will be configured in deco.cx's Admin
export interface Props {
  title: string;
  numberOfFacts?: number;
}

export default async function DogFacts(
  { title, numberOfFacts }: Props,
) {
  const { facts } = (await fetch(
    `https://dogapi.dog/api/facts?number=${numberOfFacts ?? 1}`,
  ).then((r) => r.json())) as { facts: string[] };
  return () => (
    <div class="p-4">
      <h1 class="font-bold">{title}</h1>
      <ul>
        {facts.map((fact) => <li>{fact}</li>)}
      </ul>
    </div>
  );
}
What I didn't like in this approach is that you need to wrap the return inside a parameterless function () =>, which will probably lead to many hard-to-debug bugs: if the dev forgets to add the closure function, it will still work, just without any fresh hook, causing a lot of confusion. It's also not trivial or elegant to return a closure function, so I decided (together with the team) not to consider this possibility.
[ ] Change section block to allow a loader function
[ ] Update documentation
Author: Marcos Candeia (@mcandeia)
State: Discussion
When using loaders for fetching data from APIs, it is common to need to add (or change) fields on existing returns. This can be a challenging task when working with imported loaders, such as the ones that return Product from schema.org. One feasible solution would be forking the loader source code and applying the necessary modifications; the drawback is that you give up automatic updates from the loader's creators. Another solution would be to just import the loader and add your new fields, but then you have to be aware of this and replicate it for every new loader that is implemented; say you have 10 loaders that return Products, now you have to import/export all of them. Let's take as an example a real-world use case: adding the reviews of a product (the number of "stars" of a given product).
This task requires changing the aggregateRating property of the Product type. Please notice the following condition: the team implementing the ratings feature may have no access to the source code repository that owns the loader source code.
One solution to this problem is the use of extension blocks, which allow developers to add new fields to existing types without modifying the source code. Extension blocks provide a way to "extend" types with additional functionality, without having to modify the original source of data.
There are multiple challenges when extending existing types, including:
When a new property needs to be added or modified, the codebase may become fragmented as different parts of the application may be affected. This can lead to increased complexity and difficulty in maintaining the codebase.
If the loaders are dependent on other sites, a change to the property may require updating those dependencies as well. This can lead to conflicts with other parts of the application that depend on different versions of the same site.
Any changes to the property require testing and validation to ensure that they do not introduce bugs or unintended behavior. This can be time-consuming and expensive, especially if the changes affect critical parts of the application.
When a new property is added, it is important to update the documentation and communicate the changes to other developers who may be affected. This can be challenging if there are multiple loaders or if the changes are complex.
Finally, any changes to the property need to be done in a way that maintains backward compatibility and does not break existing code. This can be difficult if the property is deeply integrated into the codebase or if there are many dependent modules.
Extension blocks are implemented using a simple and effective design pattern. The basic idea is to provide a modular way to extend existing code without modifying the source code itself. The implementation is quite straightforward, and it involves a few simple steps.
First, the developer creates a function that takes the original type and returns an extended type. This function is referred to as an extension block. The extension block can be used to add new properties or methods to the original type.
The following code would be used to add the aggregateRating into an existing product instance.
import { Product } from "deco-sites/std/commerce/types.ts";
import {
  ConfigYourViews,
  RatingFetcher,
} from "deco-sites/std/commerce/yourViews/client.ts";
import { ExtensionOf } from "https://denopkg.com/deco-cx/live@3c5ca2344ff1d8168085a3d5685c57100e6bdedb/blocks/extension.ts";
import { createClient } from "../commerce/yourViews/client.ts";

export type Props = ConfigYourViews;

const aggregateRatingFor =
  (fetcher: RatingFetcher) => async ({ isVariantOf }: Product) => {
    const productId = isVariantOf!.productGroupID;
    const rating = await fetcher(productId);
    return rating
      ? {
        "@type": "AggregateRating" as const,
        ratingCount: rating.TotalRatings,
        ratingValue: rating.Rating,
      }
      : undefined;
  };

export default function AddYourViews(config: Props): ExtensionOf<Product> {
  const client = createClient(config);
  const aggregateRating = aggregateRatingFor(client.rating.bind(client));
  return {
    aggregateRating,
  };
}
This code should live within the extensions/ folder, with an arbitrary name. The format of an extension is basically the same fields as the product we want to extend, but instead of returning values directly, developers can fetch data from APIs for every field that needs to be modified/added. Also, each field of an extension is a function with the following signature:
export type ExtFunc<
  T,
  TBase,
  IsParentOptional,
  PropIsOptional = IsParentOptional,
> = (
  arg: TBase,
  current: IsParentOptional extends true ? T | undefined : T,
) => PromiseOrValue<
  PropIsOptional extends false ? DeepPartial<T> : DeepPartial<T> | undefined
>;
Where:
- current is the current aggregateRating value.
- arg is the base Product.
- The return type is DeepPartial<T>, which means that the result will be merged with the original object.
Optionally, when dealing with collections that should be changed as a whole (a new property should be added or changed on each element), a special property named _forEach is allowed, providing a function that will be applied to each element.
The example below shows how to add +10 to every price inside the offers array (yes, the Product type has an offers.offers property; the latter is an array and the former an object).
export default function Add10Price(): ExtensionOf<Product> {
  return {
    offers: {
      offers: {
        _forEach: {
          // add a flat +10 to each offer's price
          price: (p: Product, curr: number) => curr + 10,
        },
      },
    },
  };
}
The WithExtensions loader
A new loader is being added alongside the extensions block: the WithExtensions loader, which has basically a single task: get data (products, in this case) from loaders and apply the configured extension transformations. This is a simple loader that can be used on any field that accepts a loader, and it has basically two properties: the data and the extension. The WithExtensions loader acts as a middle-man that gets data from loaders and applies the transformations in parallel, merging them together.
This is the proposed implementation for such a loader.
export interface Props<T> {
  data: T;
  extension: Extension<T>;
}

export default async function withExtensions<T>(
  _req: Request,
  ctx: LoaderContext<Props<T>>,
) {
  const extended = await ctx.state.$live.extension?.(ctx.state.$live.data);
  return extended?.merged(); // returns the extension applied to the target object
}
As you can see in the previous example, the loader contains only one extension property, not an array of them. This is only for simplicity, to avoid code duplication when dealing with multiple extensions. For that, I propose a Composite extension that receives an array of extensions and composes them together as a single one, which makes it really easy to allow extensions on other blocks in the future. You can see the proposed code below:
import { Extended, Extension } from "$live/blocks/extension.ts";
import { notUndefined } from "$live/engine/core/utils.ts";
import { deepMergeArr } from "$live/utils/object.ts";
import { DeepPartial } from "https://esm.sh/v114/utility-types";

export interface Props {
  extensions: Extension[];
}

const apply = <T, R>(param: T) => (f: (arg: T) => Promise<R>) => f(param);

export default function composite({ extensions }: Props) {
  return async <TData>(data: TData) => {
    const applied = (await Promise.all(
      extensions?.filter(notUndefined).map(
        apply(data),
      ),
    )) as Extended<TData>[];
    return applied.reduce(
      (finalObj, extended) =>
        deepMergeArr<DeepPartial<TData>>(
          finalObj,
          extended.value,
        ),
      {},
    );
  };
}
One key advantage of this approach is that it allows for composability of extensions. Since each extension block creates a separate instance of the extended type, multiple extension blocks can be combined to create even more complex objects. This makes it easy to add new functionality to existing code without modifying the original source.
Overall, extension blocks are a powerful tool for developers looking to extend existing code in a modular and composable way. By allowing for easy extension of existing types and objects, extension blocks help to improve code maintainability and reduce the need for code duplication.
It is important to mention that only one task for each persona is required.
For developers who want to extend existing types: create an extension inside the extensions/ folder.
For business users:
[ ] Create the extension block
[ ] Update documentation
Currently, flags are used in a non-opaque way (see this example): audiences are just flags, but the user, in this case RoutesSelection, knows that it is dealing with an audience. Ideally flags should be opaque, in the sense that RoutesSelection should receive the result of routes and overrides and apply only the merge logic.
Flags should return whatever is necessary to be used; a flag should evaluate itself and return its true or false value.
This allows flags to be used everywhere a block is requested, because you can swap out any Block for a Flag that returns a block instead.
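A minimal sketch of what an opaque flag could look like (all names are illustrative, not the actual live.ts API):

```ts
// Hypothetical opaque flag: evaluates itself against the request and
// resolves to one of two blocks. Callers never learn it is a flag.
export interface Props<TBlock> {
  matcher: (req: Request) => boolean; // the flag's own evaluation logic
  whenTrue: TBlock;
  whenFalse: TBlock;
}

export default function flag<TBlock>(
  { matcher, whenTrue, whenFalse }: Props<TBlock>,
  req: Request,
): TBlock {
  return matcher(req) ? whenTrue : whenFalse;
}
```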
I'd suggest using upgrade or a similar library to keep the project's dependencies up to date.