
happysoup.io's Introduction

HappySoup.io

100% free and open-source impact and dependency analysis for Salesforce

HappySoup.io helps you keep your Salesforce org clean and healthy by letting you see metadata dependencies in ways that have never been possible before (at least not for free!).

If you want to support HappySoup.io by helping us cover the Heroku costs, you can make a donation using the link below.

👍 Features

  • Impact Analysis (aka "Where is this used?") for custom fields, email templates, Apex classes, custom labels and many more.
  • The first and only app that creates Deployment Boundaries
  • Easily export the dependencies to Excel, CSV files or package.xml
  • Bypass all the limitations of the MetadataComponentDependency API
  • Intuitive UI, easy-to-follow tree structure
  • Log in from anywhere, no installation is required
  • Available as a web app, local app or Docker app - forget about all security concerns!

Watch full demo

What is a Happy Soup?

As a long-time customer, you’ve built apps and customizations on the platform for several releases. The more you customize and build on the platform, the more complexity you create in your org. Your single Salesforce org has become a huge container for all the metadata you’re managing and interacting with. We refer to this horn of plenty as your “happy soup.” Trailhead

Who is this for

Developers & Architects

  • Discover Deployment Boundaries that can be the baseline for a scratch org or unlocked packages
  • Quickly get a package.xml of your deployment boundary
  • Get immediate insights with built-in charts
  • Drill down to the last dependent metadata in an easy-to-follow tree structure

Administrators

  • Find all the metadata used in a page layout (fields, buttons, inline pages, etc.) and export it to Excel to review opportunities for optimization
  • Don't break your org! Know the impact of making changes to a field, validation rule, etc.

How we enhanced the MetadataComponentDependency API

Salesforce Happy Soup is built on top of the MetadataComponentDependency tooling API. While this API is great, it has huge limitations that make it hard to work with (spoiler: we bypass all these!)

  • Custom field names are returned without the object name and the __c suffix. For example, Opportunity.Revenue__c becomes Revenue, which makes it very hard to know which fields are being referenced. The only way around this is to manually (and painfully) retrieve additional information through the Tooling and Metadata APIs (a sketch of this workaround follows the list).
  • Validation rule names are also returned without the object prefix, so Account.ValidationRule becomes ValidationRule. If you want to export this via package.xml, again you'd have to use other APIs to retrieve this information.
  • Objects referenced via a lookup field are not returned. For example, if you have a custom field Account.RelatedToAnotherObject__c pointing to RelatedToAnotherObject__c, that object is not brought back as a dependency, which is wrong because you can't deploy that custom field to an org where that object doesn't exist.
  • Global Value Sets are not returned when picklist fields depend on them.
  • Lookup filters are returned with cryptic names depending on whether they belong to a custom object or a standard one.
  • The raw API cannot tell you whether a field is used in an Apex class in read or write mode. HappySoup can: for example, if a field is used in an assignment expression, you know the class is assigning values to that field, and the app shows this with a visual indicator.
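As an illustration of the first limitation and its workaround, the sketch below resolves the full names of custom fields by querying the CustomField Tooling object. This is a minimal example under assumptions, not HappySoup's actual implementation; the function name, instance URL, access token and the Node 18+ fetch call are all placeholders.

// Minimal sketch (not the actual HappySoup code): resolve a bare name like
// "Revenue" back to "Opportunity.Revenue__c" by querying the CustomField
// Tooling object. instanceUrl and accessToken are supplied by the caller.
async function resolveFieldNames(fieldIds, instanceUrl, accessToken) {

    const soql = `SELECT Id, DeveloperName, TableEnumOrId FROM CustomField
                  WHERE Id IN ('${fieldIds.join("','")}')`;

    const response = await fetch(
        `${instanceUrl}/services/data/v57.0/tooling/query/?q=${encodeURIComponent(soql)}`,
        { headers: { Authorization: `Bearer ${accessToken}` } }
    );
    const { records } = await response.json();

    // TableEnumOrId holds the object API name for standard objects, or the
    // CustomObject Id for custom objects (which would need one more query
    // to turn into an API name).
    return records.map(r => `${r.TableEnumOrId}.${r.DeveloperName}__c`);
}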

As mentioned above, Salesforce Happy Soup fixes all these issues so that you can focus on learning about your dependencies rather than fighting the API! 👊

Back to top

🚫 Security

We understand security is very important in the Salesforce ecosystem. Read our Privacy Policy to understand what data is collected and how it is used. This section only addresses technical security.

How is your token stored

Your access token will be temporarily stored in a Redis database which is provisioned by Heroku. The token is then retrieved by the server every time you use the app, as long as you have a valid server-side session with the app and the required cookies.

Access to the database is restricted and the credentials are not stored anywhere in the source code; it is managed via environment variables.

This is the same mechanism used by Workbench, OrgDoctor, MavensMate and other open-source projects.
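For illustration only, here is a minimal sketch of that per-request token lookup, assuming the ioredis client and an invented key format (this is not the actual HappySoup source):

// Sketch only: the Redis credentials come from an environment variable, never
// from the source code, and the token is fetched per request using the
// caller's server-side session id. 'token:<sessionId>' is a made-up key.
const Redis = require('ioredis');
const redis = new Redis(process.env.REDIS_URL);

async function getAccessToken(sessionId) {
    const token = await redis.get(`token:${sessionId}`);
    if (!token) throw new Error('No valid session - please log in again');
    return token;
}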

Server-side security

Every time a request is made to the app, the request goes through the following layers of security (a sketch follows the list):

  • Every HTTP request is encrypted with SSL certificates managed by Heroku.
  • We use CORS to validate HTTP requests made from a web browser.
  • Once CORS is validated, we check that the request contains a cookie, which is encrypted. The cookie is then used to retrieve a server-side session. If the session does not exist or has expired, the user is sent back to the login page.
  • Once the server-side session is verified, we check that the user has a valid session with their Salesforce org. If the user doesn't have a valid session with Salesforce, we send the user back to the login page.
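A rough sketch of that middleware chain using Express, the cors package and express-session; the route path and session field names are assumptions for illustration, not the actual HappySoup code:

const express = require('express');
const cors = require('cors');
const session = require('express-session');

const app = express();

// 1. CORS: only the domains listed in CORS_DOMAINS may call the API from a browser.
app.use(cors({
    origin: (process.env.CORS_DOMAINS || '').split(','),
    credentials: true
}));

// 2. Encrypted session cookie backed by a server-side session store.
app.use(session({
    secret: process.env.SESSION_SECRET,
    resave: false,
    saveUninitialized: false
}));

// 3. Reject API requests whose session has no Salesforce credentials attached
//    (the 'salesforceToken' field is hypothetical).
app.use('/api', (req, res, next) => {
    if (!req.session || !req.session.salesforceToken) {
        return res.redirect('/login'); // back to the login page
    }
    next();
});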

Back to top

One-click Deployment to your own Heroku Account

You can use the following button to quickly deploy the application to your own Heroku account.

Deploy

This is by far the easiest way to use the app on your servers so that you don't have to worry about security.

When you click the button and log in to your Heroku account, you'll see a page similar to the following:

NOTE When you see this page, you can add dummy values on the empty Config Vars. We'll come back and edit them with the real values at a later step.

Once you've added dummy values, just click the Deploy App button. Once the app is deployed, you'll be able to launch it and, at a minimum, see the login page. Congratulations!!

Now, the steps to get the app fully working are as follows:

1. Create a Connected App in any org

For the app to be able to use OAuth tokens, it needs a Connected App. The original app uses a Connected App that lives in one of our organizations; your copy can use a Connected App in any org as well - it doesn't matter which org it is, but we recommend using a developer or production org because sandboxes are eventually refreshed.

The OAuth configuration for the Connected App needs to look like this:

It is very important that you change the Callback URL to point to your Heroku app domain name, which is the name that you chose when deploying the app.

For example, if your app name is mycompany-happysoup.herokuapp.com then the Callback URL must be https://mycompany-happysoup.herokuapp.com/oauth2/callback. You must also add the following URL so that you can run the app locally using the heroku local command:

http://localhost:3000/oauth2/callback

Note that if you changed the default PORT environment variable in the deployment page, you need to update the localhost port in the callback URL as well.

Once you have created the Connected App, get the Client Secret and Client Id; we'll need them in the next step.

Important: Make sure you uncheck "Require Proof Key for Code Exchange (PKCE) Extension for Supported Authorization Flows" setting on the Connected App.

2. Editing the Config Vars

Finally, we come back to the Config Vars.

You can edit the Config Vars at https://dashboard.heroku.com/apps/YOUR-APP-NAME > Settings > Reveal Config Vars

All the other variables should be configured already, including REDIS_URL which is automatically added by Heroku since Redis is required to deploy the app.

These are the Config Vars that you MUST add for the app to work:

OAUTH_CLIENT_ID: This is the Client Id that you just got from your connected app.

OAUTH_CLIENT_SECRET: This is the Client Secret that you just got from your connected app.

SESSION_SECRET: Just put any random string, like 349605ygtdhht%&^&^ (NOT this one though!)

CORS_DOMAINS: This must be the full URL of your Heroku app, based on the App Name you provided at the very beginning, i.e. https://THE-NAME-YOU-CHOSE.herokuapp.com. For example, my version of the app lives at https://sfdc-happy-soup.herokuapp.com

That's it! Now you can use the app on your servers.

Docker Deployment

If you want to use the app locally on your computer, you can easily create the app using Docker. Just follow the tutorial and you'll be up and running in minutes!

Tutorial: Installing HappySoup.io with Docker

These steps describe the process in the video above, using example text (in bold) which should be updated to match your environment:

Prerequisites Not Covered here

  1. Install Docker and docker-compose
  2. Clone the git repository
  3. Admin access granted to a Salesforce organization

Create a Salesforce Connected App

  1. Setup > Apps > Connected Apps > New (alternate path: Setup > Apps > App Manager > New Connected App)
  2. Connected App Name: Salesforce Happy Soup
  3. API Name: Salesforce_Happy_Soup
  4. Contact Email: [email protected]
  5. Enable OAuth Settings: (checked)
  6. Callback URL: http://localhost:3000/oauth2/callback
  7. Selected OAuth Scopes: Access the identity URL service (id, profile, email, address, phone); Manage user data via APIs (api); Full access (full); Perform requests at any time (refresh_token, offline_access)
  8. Save

Connect Happy Soup to Salesforce

  1. Open the docker-compose.yml file in the git repository.
  2. Update OAUTH_CLIENT_ID & OAUTH_CLIENT_SECRET with the values from the new Connected App. Note that each is defined twice and both should be updated.
  3. Start the Docker containers: docker-compose up
  4. In a web browser, open http://localhost:3000
  5. Login Type: My Domain or Production
  6. If using My Domain, enter your Salesforce organization URL.
  7. I agree to the Happy Soup Privacy Policy: (checked)
  8. Log in with Salesforce
  9. Allow Access? Allow

Back to top

Build your own apps using the core npm library

Salesforce Happy Soup is built on top of the sfdc-soup Node.js library, an API that returns an entire Salesforce dependency tree in different formats, including JSON, Excel and others.

Head over to its repository to learn how you can create your own apps.

Back to top

Privacy Policy

It's important that you understand what information Happy Soup collects, how it is used and how you can control it.

Remember that you can always deploy the app to your own Heroku account or use it locally, in which case you don't need to worry about security.

Our full Privacy Policy can be found here. The sections below contain the specifics about how your Salesforce data is used and what your options are to stop access to your data.

Information Collected

Your Personal Information

Your Salesforce username, email and display name will be captured when you log in to Happy Soup.

This information is used to display your username details on the header of the Happy Soup app so that you can easily know which org you are logged into.

Your Salesforce Org Id and User Id (not the username/email) are also used as a key to submit asynchronous jobs to Happy Soup's app server. This allows us to group all your requests in a single area of the database.
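For illustration, a key of that kind could be built roughly like this (the exact format is an internal detail and may differ):

// Sketch only: a job key derived from the Org Id and User Id, so that all jobs
// submitted by the same user/org pair end up grouped together in the database.
function buildJobKey(orgId, userId, jobName) {
    return `${orgId}.${userId}:${jobName}`;
}

// e.g. buildJobKey(orgId, userId, 'usage-Account.Type')
//   -> '<orgId>.<userId>:usage-Account.Type'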

Your Salesforce Org's Metadata

To be able to analyze your dependencies, we need to query your org's metadata. Some metadata is queried only to get its name, while other metadata is queried to inspect its contents and find dependencies (e.g. Apex classes).

The specific objects that are queried are as follows:

  • MetadataComponentDependency
  • CustomField
  • CustomObject
  • ApexClass
  • EmailTemplate
  • Layout
  • ValidationRule
  • CustomLabel
  • WebLink
  • LightningComponentBundle
  • AuraComponentBundle

Other objects may be added as we further enhance our dependency analysis capabilities.

All this metadata, along with the results of a dependency query that you execute via the UI, will be cached in a secure server-side session that is isolated to your session with Happy Soup.

This metadata is cached to enable subsequent requests to be performed faster.

The session data and its cache are deleted when whichever of the following occurs first:

  • 8 hours have passed since you logged into Happy Soup. This is because the access token provided by Salesforce will also live for 8 hours. This means that you can use Happy Soup for a maximum of 8 hours using the same org, without having to log in again. After 8 hours, the session is completely deleted.
  • When you log out manually. When a logout action is performed, the session is completely deleted.
  • The app tries to issue a request to Salesforce but the access token has been revoked. When this occurs, the session is completely deleted.

Cookies

We use cookies and local storage for the following information:

  • Your session id cookie
  • The Salesforce domain you used. This will help you quickly log in the next time you use the app.

Third-Party Apps/Providers

Happy Soup uses the following software:

  • Heroku Redis: Used to store your session and to process all the jobs that are submitted to the app.
  • Logentries: Logging and monitoring software. Logs are stored for 7 days and some logs may include the names of your metadata. For example, when submitting a job to see the usage of a custom field, the custom field name is appended to the URL. This URL will be in the logs for a maximum of 7 days.

Your Rights

Right to be forgotten

If you want Happy Soup to immediately delete all the data we have collected from your org, you can use the Logout button on the main page.

When this button is clicked, the server session is completely deleted and cannot be recovered.

If you no longer have access to the browser or device from which you initiated a session with Happy Soup but still want to prevent Happy Soup from accessing your org's metadata, you can go to your Salesforce org > Setup > Connected Apps OAuth Usage > find the token for Salesforce Happy Soup and revoke it.

Happy Soup will no longer be able to use the access token and you'll be logged out the moment you try to use the app again.

Right to Access Data

If at any time you want to get the data that we have from your org, you can contact us at [email protected]. Note that because all the data we collect from you is deleted in 8 hours, we can only provide you with your data if it's still in our database.

Right of Restriction of Processing

If at any time you want Happy Soup to stop processing your data and you are unable to log out (because you no longer have access to the original device you logged in with), you can email us at [email protected] and we will delete all your information.

Back to top

happysoup.io's People

Contributors

chris-lindbergh-uplight-com, jamessimone, pgonzaleznetwork, rubenhalman


happysoup.io's Issues

Master Detail Relationship Dependencies

It seems that Master-Detail relationships between custom objects are not considered when returning dependency information for a custom object. Have I got that right, or is this perhaps supported in the project but not yet on the HappySoup.io site?

Support for workflow rules

First, we need to query the ids of the WorkflowRule Tooling records whose TableEnumOrId matches that of the field in question.

We can then calculate their full names based on the Name field and the TableEnumOrId.

Once we have the full names we can read their contents with the metadata api

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:met="http://soap.sforce.com/2006/04/metadata">
   <soapenv:Header>
      <met:CallOptions>
      </met:CallOptions>
      <met:SessionHeader>
         <met:sessionId>00D3h000005XLUw!AQkAQELTH2m.Lqef4jZlh70K3wgF7T_WwTWzX_Br88m0nb01OBut1Y0V9aehGKJ_uwn19HSQy9GS8R51yPeD21FoDZLiujc_</met:sessionId>
      </met:SessionHeader>
   </soapenv:Header>
   <soapenv:Body>
      <met:readMetadata>
         <met:type>WorkflowRule</met:type>
         <!--Zero or more repetitions:-->
         <met:fullNames>Boat__c.My Workflow Rule 2</met:fullNames>
         <met:fullNames>Boat__c.My Workflow Rule</met:fullNames>
      </met:readMetadata>
   </soapenv:Body>
</soapenv:Envelope>

Convert the response to JSON and extract the contents:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="http://soap.sforce.com/2006/04/metadata" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
   <soapenv:Body>
      <readMetadataResponse>
         <result>
            <records xsi:type="WorkflowRule">
               <fullName>Boat__c.My Workflow Rule 2</fullName>
               <active>false</active>
               <formula>isblank( Boat_Image__c )</formula>
               <triggerType>onCreateOrTriggeringUpdate</triggerType>
            </records>
            <records xsi:type="WorkflowRule">
               <fullName>Boat__c.My Workflow Rule</fullName>
               <actions>
                  <name>my_email_alert</name>
                  <type>Alert</type>
               </actions>
               <active>false</active>
               <criteriaItems>
                  <field>Boat__c.Name</field>
                  <operation>equals</operation>
                  <value>trigger</value>
               </criteriaItems>
               <criteriaItems>
                  <field>Boat__c.Picture__c</field>
                  <operation>equals</operation>
                  <value>value</value>
               </criteriaItems>
               <triggerType>onCreateOrTriggeringUpdate</triggerType>
            </records>
         </result>
      </readMetadataResponse>
   </soapenv:Body>
</soapenv:Envelope>

We then go through each JSON object and, if the field matches, add it to the usage list, as sketched below.
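A rough sketch of that last step; it assumes the XML-to-JSON conversion has already happened and mirrors the structure of the readMetadata response above:

// Sketch only: given workflow rule records already converted from XML to JSON,
// keep the ones whose formula or criteria reference the field in question
// (e.g. fieldFullName = 'Boat__c.Picture__c').
function findWorkflowUsages(workflowRules, fieldFullName) {
    const shortName = fieldFullName.split('.')[1]; // formulas omit the object prefix
    return workflowRules.filter(rule => {
        const inFormula = !!rule.formula && rule.formula.includes(shortName);
        const criteria = [].concat(rule.criteriaItems || []); // single item or array
        const inCriteria = criteria.some(item => item.field === fieldFullName);
        return inFormula || inCriteria;
    });
}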

How to handle multiple sessions?

If a user goes back to the home page (which they could bookmark), they won't know that a session is already established. If they log in to the same org again, the cache will be invalidated, which is undesired.

The home page could redirect the user to the main page if an API call determines that the cookie is a valid session, however this redirect behaviour is not a good experience.

Also, the user should be able to log in to different environments without losing the cache for another one. But we can't force the cookie to be different, because that defeats the whole purpose.

One option would be to somehow tie a single cookie to multiple server sessions.
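One way to sketch that idea with express-session is to keep a map of per-org state inside a single server-side session, so the cookie stays the same but each org keeps its own cache (illustrative only, not a committed design):

// Sketch of the "one cookie, many org sessions" idea: logging in to a second
// org does not wipe the cache of the first one.
function storeOrgSession(req, orgId, connectionInfo) {
    req.session.orgs = req.session.orgs || {};
    req.session.orgs[orgId] = {
        connection: connectionInfo,
        cachedAt: Date.now()
    };
}

function getOrgSession(req, orgId) {
    return (req.session.orgs || {})[orgId];
}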

Possible to add Opportunity Contact Role field

We're using opportunity contact roles a lot and I've been alerted that the Role field seems to change automatically from whatever the user sets to "Decision Maker" on all newly added contact roles. I don't see any flows, workflows, or process builders which affect it, so I'm at a loss.

Note: I'm newer with the company and we just merged two orgs together so that's really why I have no idea what could be affecting this.

Oauth redirect_uri should be dynamic

At the moment it is dynamic on the client side but not on the server side (it's hardcoded). Perhaps the client should pass an additional property on the state object, to let the server know which redirect_uri it needs to use.
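A sketch of that suggestion: the client encodes the redirect_uri into the OAuth state parameter and the server decodes it before the token exchange. Property names are illustrative, and a real implementation would also keep a CSRF value in the state:

// Client side (sketch): include the redirect_uri in the OAuth state parameter.
const state = btoa(JSON.stringify({
    redirectUri: `${window.location.origin}/oauth2/callback`
}));

// Server side (sketch): decode the state and use the redirect_uri it carries
// instead of a hardcoded value.
function getRedirectUri(stateParam) {
    const { redirectUri } = JSON.parse(Buffer.from(stateParam, 'base64').toString());
    return redirectUri;
}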

Maybe not a bug

This may not be a bug, but a lot of metadata isn't displayed when using the tool (at least with standard fields). I had the task of finding where the Opportunity.Amount field was used in an org, and the only metadata it showed was Workflow Rules, Workflow Field Updates and Validation Rules.
But it left out a lot of formula fields, Flows, Process Builders, list views, field sets and layouts. Can't say for code because there was no code related (at least no Apex showed up for me when searching in VS Code).
But it would be a great plus to be able to display all those other metadata types.

Use EntityParticle to get list of field names instead of listMetadata()

listMetadata() returns the 18-digit id which I need for subsequent queries, but it doesn't return the field label, which is a problem because sometimes the API name is totally different from the label. Ideally, in the UI, custom fields would show up as follows:

Customer Priority (Customer_Priority__c)

For this, we can use the EntityParticle object instead:

SELECT QualifiedApiName, Label, DataType, DurableId FROM EntityParticle WHERE EntityDefinition.QualifiedApiName IN ('Account')

This will return the label. The id returned in the DurableId is the 15-digit one, but I could convert it to 18 digits using this NPM package or the following code:

https://www.npmjs.com/package/eighteen-digit-salesforce-id

(function (w) {
    // Convert a 15-character case-sensitive Salesforce id into its
    // 18-character case-insensitive equivalent by appending a 3-character
    // checksum derived from the casing of the original id.
    w.normalizeId = function (id) {
        var i, j, flags, alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ012345",
            isUppercase = function (c) {
                return c >= "A" && c <= "Z";
            };

        if (id == null) return id;
        id = id.replace(/\W/g, "");
        if (id.length != 15) { return id; } // already 18 characters (or invalid): leave untouched

        // Process the id in three blocks of 5 characters; each block
        // contributes one checksum character based on which of its
        // characters are uppercase.
        for (i = 0; i < 3; i++) {
            flags = 0;
            for (j = 0; j < 5; j++) {
                if (isUppercase(id.charAt(i * 5 + j))) { flags += 1 << j; }
            }
            id += alphabet.charAt(flags);
        }
        return id;
    }
})(window);

A drawback is that the query needs to be filtered by the object. I need to figure out if the inconvenience of selecting the object name is worth the benefit.

Converting circular structure to JSON

Some Visualforce pages throw the following error when we try to calculate the deployment boundary:

["TypeError: Converting circular structure to JSON\n --> starting at object with constructor 'Object'\n | property 'ApexClass' -> object with constructor 'Array'\n | index 0 -> object with constructor 'Object'\n --- property 'references' closes the circle\n at JSON.stringify (<anonymous>)\n at Object.tryCatch (/app/node_modules/bull/lib/utils.js:5:15)\n at /app/node_modules/bull/lib/job.js:220:25\n at processTicksAndRejections (internal/process/task_queues.js:93:5)"]

I suspect this has something to do with the controller referencing the page again, causing a circular dependency.

Needs further investigation...
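In the meantime, a common general-purpose workaround (not necessarily the right fix for this bug) is to drop repeated references with a WeakSet-based replacer before calling JSON.stringify:

// Generic sketch: replace already-seen objects so JSON.stringify does not
// throw "Converting circular structure to JSON".
function safeStringify(value) {
    const seen = new WeakSet();
    return JSON.stringify(value, (key, val) => {
        if (typeof val === 'object' && val !== null) {
            if (seen.has(val)) return '[Circular]';
            seen.add(val);
        }
        return val;
    });
}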

Polling enhancements

Polling of the jobs API should happen every second, because it takes around 7 ms for the API to respond and we are doing more calls than the server can reply to in a timely manner.

Also, the polling should stop when the job status reports an error, for example:

{"jobId":"[email protected]:deps-01q34000000TjVKAA01600103925921","state":"failed","progress":0,"reason":"no-sfdc-connection"}

Rollup summary fields

When doing impact analysis on a custom field, if the field is used by a rollup summary field, that rollup summary field should be in the results.

This might be hard to implement because I'd need to scan all the fields of the source object, see which ones are rollup summaries, and then check whether they reference the field. Because of the long processing times, this should be a topping, not a default setting.

Use of semantic release

We should use semantic release to have standardised commit messages and to allow for future CI/CD.

Not allowed by CORS at origin

I am testing HappySoup locally using Docker and I am seeing a CORS error while selecting the Metadata Type from the UI (localhost:3000). I have checked all the CORS settings in Vue.config.js and app.json, and everything seems fine. Not sure what is happening. Please see below for the error details.

We are sorry, something went wrong. Please click here to log a Github issue so that we can review the error. Please include the following details: Not allowed by CORS

Error: Not allowed by CORS at origin (/app/backend/routes/api/router.js:19:16) at /app/node_modules/cors/lib/index.js:219:13 at optionsCallback (/app/node_modules/cors/lib/index.js:199:9) at corsMiddleware (/app/node_modules/cors/lib/index.js:204:7) at Layer.handle [as handle_request] (/app/node_modules/express/lib/router/layer.js:95:5) at next (/app/node_modules/express/lib/router/route.js:137:13) at next (/app/node_modules/express/lib/router/route.js:131:14) at next (/app/node_modules/express/lib/router/route.js:131:14) at Route.dispatch (/app/node_modules/express/lib/router/route.js:112:3) at Layer.handle [as handle_request] (/app/node_modules/express/lib/router/layer.js:95:5)

RecordType field dependencies

Hi,
Recently I came across a requirement to search for custom and standard RecordTypeId or RecordType.Name dependencies in an org's metadata. Looking for a feature to support this in HappySoup.
Thanks

Automated job should remove stalled redis keys

For example

bull:happy-soup:completed:00D7F0000fafaJUAQ.0057F00000fafAC:usage-00X7F0ffVgqjUAC-EmailTemplate1608274692249

I would need to make sure this key is really not needed any more. Perhaps look at the created time?

Also, the completed stats need to be deleted by a cron job.

Unable to find the field dependency of account.type

We are sorry, something went wrong. Please click here to log a Github issue so that we can review the error. Please include the following details: job stalled more than allowable limit

Error: Missing lock for job 00D3M0000008mntUAA.0053M000000VDVFQA4:usage-Account.Type-StandardField1639470643103 finished at Object.finishedErrors (/app/node_modules/bull/lib/scripts.js:185:16) at /app/node_modules/bull/lib/scripts.js:172:23 at runMicrotasks () at processTicksAndRejections (internal/process/task_queues.js:95:5)

sObject type 'ApexClass' is not supported

398 <190>1 2020-12-16T08:23:30.688766+00:00 app worker.1 - - HAPPY SOUP ERROR: Tooling API call failed {"request":"***.my.salesforce.com/services/data/v48.0/tooling/query/?q=SELECT%20Id%2CName%2CNamespacePrefix%20FROM%20ApexClass","jsonResponse":[{"message":"sObject type 'ApexClass' is not supported.","errorCode":"INVALID_TYPE"}]}

Client-side async requests should be enclosed in try/catch

For example, when the server times out:

at=error code=H12 desc="Request timeout" method=GET path="/api/metadata?mdtype=ApexClass" host=sfdc-happy-soup.herokuapp.com request_id=9f2a2f56-cbd2-4ef8-b709-4ac6a84a4ae5 fwd="37.228.244.96" dyno=web.1 connect=1ms service=30003ms status=503 bytes=0 protocol=https

The async function getMetadataMembers never finishes properly, and the spinner loads forever.
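A sketch of the suggested pattern, wrapping the request in try/catch/finally so the spinner always stops; the UI helper functions are assumptions, not real HappySoup functions:

// Sketch only: make sure the spinner is hidden even when the server times out.
async function getMetadataMembers(mdtype) {
    showSpinner();                                   // hypothetical UI helper
    try {
        const res = await fetch(`/api/metadata?mdtype=${mdtype}`);
        if (!res.ok) throw new Error(`Server responded with ${res.status}`);
        return await res.json();
    } catch (error) {
        displayError(error.message);                 // hypothetical UI helper
    } finally {
        hideSpinner();                               // always stop the spinner
    }
}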

Memory Quota Exceeded

2020-12-18 04:38:05.431 450 <134>1 2020-12-18T04:36:33+00:00 app heroku-redis - - source=REDIS addon=redis-round-11563 sample#active-connections=9 sample#load-avg-1m=0.045 sample#load-avg-5m=0.025 sample#load-avg-15m=0.005 sample#read-iops=0 sample#write-iops=4.3133 sample#memory-total=15664244kB sample#memory-free=11689388kB sample#memory-cached=3122316kB sample#memory-redis=2133592bytes sample#hit-rate=0.029768 sample#evicted-keys=0
2020-12-18 04:56:14.522 132 <134>1 2020-12-18T04:56:14.182331+00:00 heroku worker.1 - - Process running mem=558M(109.1%)
2020-12-18 04:56:14.522 133 <134>1 2020-12-18T04:56:14.184529+00:00 heroku worker.1 - - Error R14 (Memory quota exceeded)

https://devcenter.heroku.com/articles/heroku-redis#maxmemory-policy
https://devcenter.heroku.com/articles/metrics#memory-usage
https://devcenter.heroku.com/articles/optimizing-dyno-usage#basic-methodology-for-optimizing-memory

Check flexipages for dependencies

We have plenty of components on our Lightning Pages that have record visibility filters, which the tool is not picking up.

This is stored as visibility rules in the flexipage metadata files, and it is not being detected by the tool.

Impact Analysis for Picklist Values

It would be great if Happy Soup could perform an impact analysis on picklist values.

For example, my team had to change all "Working" picklist values to "Active". Thus, we had to comb through metadata to find which picklist fields had a value of "Working". This was extremely difficult and time consuming.

Impact analysis should include profile and permission sets

SELECT PermissionSetId
FROM PermissionSetAssignment
WHERE AssigneeId = '00580000006ZFwd'

SELECT Id
FROM PermissionSet
WHERE Id IN ()

SELECT Id, SObjectType, Field, PermissionsRead, PermissionsEdit
FROM FieldPermissions
WHERE ParentId IN ()
AND SObjectType = 'Asset'

404 when connecting to localhost (docker)

Hello,

I am trying to run Happy Soup locally using Docker via these instructions.

npm run dc-build seems to run successfully; however, when I try to connect via http://localhost:3000, I get the following error:

{ "error": true, "statusCode": 404, "message": "Not Found", "stack": "NotFoundError: Not Found\n at /app/backend/server/app.js:45:8\n at Layer.handle [as handle_request] (/app/node_modules/express/lib/router/layer.js:95:5)\n at trim_prefix (/app/node_modules/express/lib/router/index.js:317:13)\n at /app/node_modules/express/lib/router/index.js:284:7\n at Function.process_params (/app/node_modules/express/lib/router/index.js:335:12)\n at next (/app/node_modules/express/lib/router/index.js:275:10)\n at /app/backend/server/app.js:25:5\n at Layer.handle [as handle_request] (/app/node_modules/express/lib/router/layer.js:95:5)\n at trim_prefix (/app/node_modules/express/lib/router/index.js:317:13)\n at /app/node_modules/express/lib/router/index.js:284:7" }

Here is my docker-compose.yml (keys redacted)

version: '3'

services:
  redis:
    image: 'redis'
  webapp:
    build:
      context: .
      dockerfile: ./docker/web/Dockerfile
    ports:
      - "3000:3000"
    environment:
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - OAUTH_CLIENT_ID=#KEY
      - OAUTH_CLIENT_SECRET=#KEY
      - SFDC_API_VERSION=49.0
      - SESSION_SECRET=whatever
      - PORT=3000
      - CORS_DOMAINS=http://localhost:3000,https://sfdc-happy-soup.herokuapp.com,https://happysoup.io
      - ENFORCE_SSL=false
  worker:
    build:
      context: .
      dockerfile: ./docker/worker/Dockerfile
    environment:
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - OAUTH_CLIENT_ID=#KEY
      - OAUTH_CLIENT_SECRET=#KEY
      - SFDC_API_VERSION=49.0
      - SESSION_SECRET=whatever
      - PORT=3000
      - CORS_DOMAINS=http://localhost:3000,https://sfdc-happy-soup.herokuapp.com,https://happysoup.io
      - ENFORCE_SSL=false

Any assistance is greatly appreciated!!

Problem while querying Email Templates

We are sorry, something went wrong. Please click here to log a Github issue with the following details: Server Error Error: Server Error at Object.query (/app/sfdc_apis/tooling.js:42:28) at processTicksAndRejections (internal/process/task_queues.js:93:5) at async findReferences (/app/sfdc_apis/metadata-types/EmailTemplate.js:24:25) at async seachAdditionalReferences (/app/sfdc_apis/usage.js:324:36) at async Object.getUsage (/app/sfdc_apis/usage.js:17:40) at async Object.usageJob (/app/services/jobs.js:69:20) at async Queue. (/app/services/worker.js:36:20)

Autolaunched Flows Referenced in Process Builder

I'm not sure if this is something that isn't supported but I tried using it to search for a Flow that I know is referenced in Process Builder. It wasn't able to identify it and I got the following message.

Unable to find any metadata items that use or reference the Flow FLOW_NAME_HERE. This either means that the Flow is not being used at all or that it is used by metadata types that are not yet supported by HappySoup.

Order.Status field not showing up

Using the Impact Analysis tool, filtering by Standard Fields, when I type "Status" it only suggests the fields Case.Status and Lead.Status, but Order.Status is missing. The logged-in user has access to this field. Am I doing something wrong?

Add Product2.Description

Was just looking to see where we were using this, doesn't seem like it's supported by Happy Soup yet. Can this be added?
