
relativity-trace-documentation's People

Contributors

agnieszkachmielowiec, akorczynskikcura, atien23, brianheinz, calimetikcura, chiawang, clemrelativity, daccetta2, david-bachmann, digitalsaucepro, enaujokas, grzegorz-pasternak, jaimebecker, jwhitingrel, kasiawidlak, krzysztof-kukla1974, mateuszluczak, mgibson1, mireksk, mishakogan, mjdockman, nmaa75, peterhallerrel, raekaras, seanu, sethmk, soniachy, tracequeena, wfreltrace, yellokondi

relativity-trace-documentation's Issues

Make improvements to the Terms section of policy creation

I'm looking at our documentation for Terms, and again, it is sorely lacking. Nowhere do we direct Trace clients to the dtSearch pages, even though dtSearch is the search syntax that Trace Terms is based on. We also don't document the field that is populated when a Term hits, the fact that a newly created Term is run automatically, or the fact that a single Term with bad syntax brings the whole Terms Searching task to a halt.
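
For reference, here is a hedged illustration of the dtSearch-style syntax Trace Terms accept; the exact operator set is defined by dtSearch, which is exactly why we should link to their pages:

  insider w/5 trading     - valid: proximity search, hits when the two words appear within 5 words of each other
  guarant*                - valid: wildcard
  (bribe OR kickback      - invalid: unbalanced parenthesis; one term like this halts the entire Terms Searching task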

Broken links

We need to go through documentation and update the following:

  • Broken image links
  • Images that aren't in the Aero design (i.e., old images)
  • Images that are not of high enough quality
  • Broken links, and links that point to old documentation instead of current R1 documentation

Trace Shipper System Requirements

The Trace Shipper System Requirements section mixes a few things together:

  1. Trace Shipper requirements.
  2. Merge1 requirements.

For instance, Trace Shipper doesn't require:

  1. A SQL database.
  2. .NET Framework 3.5.
  3. Microsoft Visual C++ 2017 Redistributable.

Please review the prerequisites -> deployment option A section in the Word doc:
https://kcura.sharepoint.com/:w:/r/sites/TraceTeam/_layouts/15/Doc.aspx?sourcedoc=%7B0EC71A4F-0D55-4708-9FB1-CB88655A1D81%7D&file=Relativity%20Trace%20Data%20Sources%20Guide.docx&wdLOR=cDD31A502-9A7A-5B4F-8E9A-A3F5E361B166&nav=eyJjIjo2MTA4MzA4NzN9&action=default&mobileredirect=true&cid=2036bfa1-daba-4034-9031-e333b58b3d0a

Also, regarding disk space: per the doc it is 300 GB, which is way too small. If the drop folders are on the same disk, we require at least 1 TB.

Trace Shipper installation prerequisite

We should add a note that before installing Trace Shipper, the user MUST do the following connectivity test:

  1. Install ROSE on the Shipper VM.
  2. Log in to RelativityOne with the same user/password that Trace Shipper will later use to authenticate.
  3. Navigate to the workspace's default fileshare (there might be multiple fileshares on the instance).
  4. Navigate to the Files -> EDDS folder.
  5. Create a temp folder there.
  6. Upload a sample file.
  7. Download the file.

Potential issues:

  1. ROSE cannot connect - probably an IP whitelisting / TCP ports issue (a quick reachability check is sketched below).
  2. ROSE cannot authenticate - the user doesn't have sufficient permissions to authenticate.
  3. Transfer fails - probably a UDP ports issue.
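
As a supplementary sanity check (separate from the documented ROSE test above), a minimal Python sketch like the following can confirm basic TCP reachability from the Shipper VM; the hostname and port list are placeholder assumptions, not values from the documentation:

# Hedged sketch: quick TCP reachability check from the Shipper VM.
# INSTANCE_HOST and TCP_PORTS are hypothetical placeholders.
import socket

INSTANCE_HOST = "example.relativity.one"   # placeholder for the customer's RelativityOne instance
TCP_PORTS = [443]                          # HTTPS; file transfer may also need additional TCP/UDP ports

for port in TCP_PORTS:
    try:
        # Attempt a TCP connection with a short timeout.
        with socket.create_connection((INSTANCE_HOST, port), timeout=5):
            print(f"TCP {port}: reachable")
    except OSError as exc:
        print(f"TCP {port}: blocked or unreachable ({exc}) - check IP whitelisting / firewall rules")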

Authorization page

We use the term authorization in this documentation, but on the data source pages we call it Credentials. We could be more consistent here, and the data source documentation pages that mention credentials should link to the authorization documentation page.

Update Ingestion Profile on client feedback

Ingestion Profiles - Integration Points - Documentation (relativitydev.github.io)
  • When setting up an Ingestion Profile for the O365 email data being read in, the documentation links to what appears to be a Relativity v9 app, “Create & Map Field Catalog – Full”, which we don’t have in our app library. Is there a current version of this app we are supposed to use for RelativityOne? For testing purposes, I’ve just manually set up a mapping for DocID.
  • There is nothing in the documentation about actually setting up Integration Points, so I haven’t done this.
  • The documentation is also missing Appendix A and B and just has Appendix C – am I missing anything I need there?
  • I noticed I have duplicate values for a lot of items for some reason – e.g., Data Mapping has two entries for most values.

Document cold start scenario

We should document the special considerations related to the cold start scenario.

  1. How to tweak the Collect Data Source parameters?
  2. How to avoid throttling issues?
  3. How to monitor the system and adjust?
  4. How to use the "merge data batches during cold start" parameter?
  5. How to configure Trace -> Setup -> Data Retrieval -> Run Interval so that it doesn't produce too many Data Batches on start and doesn't overwhelm the queue of 10 parallel collection jobs?

Example:

  1. Stop the Teams connector.
  2. Configure Trace -> Setup -> Data Retrieval -> Run Interval to 1 hour, so only one Data Batch (of each type) is created per hour.
  3. Abandon all Data Batches that were previously run (for instance with an incorrect setup) - that cancels all underlying Collect jobs.
  4. Reconfigure the Data Source.
  5. Set Start Date to XYX.
  6. Set Frequency in Minutes to 1440.
  7. Set Merge Batches During Cold Start to Yes.
  8. Set Max Number of Batches To Merge to 30 - this means each merged Data Batch will cover 30 days (see the sketch after this list).
  9. Reset the Data Source.
  10. Enable the Data Source.
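
The following Python sketch (referenced in step 8) shows the arithmetic behind these settings; the 1440-minute frequency and the merge count of 30 come from the steps above, while the two-year backfill window at the end is a purely hypothetical illustration:

# Hedged sketch of the cold start batch arithmetic; no Trace APIs are called here.
frequency_minutes = 1440            # step 6: one Data Batch (per type) per day
max_batches_to_merge = 30           # step 8: batches merged together during cold start

days_per_merged_batch = frequency_minutes * max_batches_to_merge / (60 * 24)
print(days_per_merged_batch)        # 30.0 -> each merged Data Batch covers 30 days

# Hypothetical example: backfilling a two-year cold start window.
cold_start_days = 730
merged_batches_needed = -(-cold_start_days // int(days_per_merged_batch))   # ceiling division
print(merged_batches_needed)        # 25 merged batches instead of 730 daily ones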

Update RIP Agents in Trace Setup

It appears that RIP currently has only one type of Agent ("Integration Points Agent"); the "Integration Points Manager" Agent is no longer available in the RIP RAP. Please remove all references to the "Integration Points Manager" Agent from the documentation - it is referenced multiple times on this page, in the "Agents" and "Setting up Relativity Trace" sections.
This page is linked from a shareable PDF that Trace Ops provides to Trace Partners (and potentially savvy Relativity customers) outlining the Trace ARM Restore steps.

Google Mail, Chat and Drive

Currently, we support three Google data sources: Google Mail, Google Chat, and Google Drive. However, our Data Source documentation divides the Google connectors into three parts - Google Suite, Google Chat, and Google Drive - which is inconsistent. We should either document a single connector (Google Suite) or three: Mail, Chat, and Drive.

Mark "logging" within Setup as deprecated

This functionality is no longer supported, even though it has not yet been removed from the product. Please mark it as deprecated so users understand it is not something they need to use.

Update Trace document extraction fields

Update Appendix B: Trace Document Extraction Fields based on Trace_Metadata, the more up-to-date metadata dictionary that we send to clients during pre-sales conversations.

Update Data Batch example in the Proactive Ingestion API is missing the Load File Path

We need to update the PUT request body to include the Load File Path field, like this:

{
  "Status": {
    "Guids": [
      "32452D3D-35D2-4FF5-92E6-1DD01D755482"
    ],
    "Artifact Type Name": "Choice"
  },
  "Load File Path": "DataTransfer\\Import\\Trace BIST Provider\\1041141\\1041337_ready\\loadfile.dedup.updates.dat",
  "DocumentCount_LoadFileGenerated": "500",
  "Timestamp_LoadFileGenerated": "3/27/2018 4:15:13 PM",
  "Parent Artifact": {
    "Artifact ID": 1003663
  },
  "Artifact Type Name": "Data Batch",
  "Artifact ID": 1044380
}
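
A minimal sketch of sending that PUT with Python's requests library; the endpoint URL and credentials below are placeholders, since the actual Proactive Ingestion API route and authentication scheme are defined in the Trace documentation rather than here:

# Hedged sketch: DATA_BATCH_UPDATE_URL and the credentials are hypothetical placeholders.
import requests

DATA_BATCH_UPDATE_URL = "https://<instance>.relativity.one/<proactive-ingestion-api-route>"

body = {
    "Status": {
        "Guids": ["32452D3D-35D2-4FF5-92E6-1DD01D755482"],
        "Artifact Type Name": "Choice",
    },
    # Note the Load File Path field that the current documentation omits.
    "Load File Path": "DataTransfer\\Import\\Trace BIST Provider\\1041141\\1041337_ready\\loadfile.dedup.updates.dat",
    "DocumentCount_LoadFileGenerated": "500",
    "Timestamp_LoadFileGenerated": "3/27/2018 4:15:13 PM",
    "Parent Artifact": {"Artifact ID": 1003663},
    "Artifact Type Name": "Data Batch",
    "Artifact ID": 1044380,
}

response = requests.put(
    DATA_BATCH_UPDATE_URL,
    json=body,
    auth=("service.account@example.com", "password"),   # placeholder basic-auth credentials
)
response.raise_for_status()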
