depthai-hardware's Issues

Cameraless OAK-USB

I'm looking at using one of my spare OAK-D-Lites to 'extend' my pipeline and hopefully increase the FPS, but I think it's a bit of a waste of resources when I'm not using the cameras on it. As I understand it, the NCS isn't compatible with DepthAI (not sure if it could be made to be), but a cameraless OAK with a USB interface (stick form-factor) would be nice.

OAK-D-LITE

Start with the why:

TLDR: Get DepthAI to the masses.

The existing DepthAI line has proven useful to many, but so many have lost out on learning about its capability and being able to build with it because it is too expensive for them to justify purchasing without first knowing for sure if they can pull off what they want with it. At $199 for the lowest-price DepthAI, that's a big ask for a lot of people - particularly people who don't immediately know if it will work for their application.

This problem is compounded by the fact that we're not good at marketing or sales, so we have not done a good job showing people that it is good for their application. We can either solve the marketing problem, or we can make a device that is low-cost enough that people can buy it even if they don't yet know if it will work for them.

So if we make this low-cost variant, and price it low enough, people will buy it not yet knowing if it solves their problem.

This enables folks to then buy it and use it for things for which we did not even know it would be useful. If it's low enough cost, it enables all sorts of engineers to take the plunge and then educate us about how the thing can be used. Folks will find all sorts of new uses for it.

And importantly, it gets it into the hands of folks who have discretionary income for a $99 device, but not for a $199 device. Which is likely a LOT more people.

Move to the how:

So how in the world can we make a $99 DepthAI? Before, there was no way; we couldn't even produce the device for that price - mainly because we didn't have the necessary degrees of freedom on camera modules.

But with ArduCam now working closely with us, we can use slightly-lower-spec image sensors that are approximately an order-of-magnitude less expensive, which then hopefully allows us to build a complete, 3-camera DepthAI that is less than $99 MSRP. Which is huge in terms of folks being able to buy it at-risk, compared to $199 MSRP.

And this might allow us to hit a $75 KickStarter Early-Bird price, for example. But even at $99, this allows so many more folks to buy and experiment with it.

And pivotally, many applications, maybe something like 90% of them, can deal with these lower-spec modules without a problem.

Move to the what:

The OAK-D-LITE, with the following specs:

  • OV7251 (640x480) instead of OV9282 (1280x800)
  • IMX214 (12MP) instead of IMX378 (12MP) - same resolution, just lower sensitivity and other reduced specs.
  • No enclosure, just the PCBA.
  • Remove the power jack (less complex power electronics, but requires a USB3-capable host, or a USB2 host that can provide 900mA - the RPi3 and RPi4 can, for example).

Simplify and cost-down the design where possible, including in ways perhaps not listed above.

IR Dot Projector + IR-capable Global Shutter Grayscale

Start with the why:

For high-granularity depth and for support of low-visual-interest objects, having a dot projector to augment these surfaces with visual interest is extremely beneficial.

Move to the how:

Leverage IR-capable camera modules (either visible-light + IR or IR-bandpass, such as the upcoming ArduCam OV9282 modules) and an IR dot projector pair (and timing/strobe circuitry for more efficient power use) to make an active-illumination variant of DepthAI.

Move to the what:

Support active illumination on DepthAI.

Pre-flashed 32GB uSD Cards for Raspberry Pi

Start with the why:

  • Setting up software is annoying.
  • Compiling and/or installing software via a CLI is even more annoying.
  • And doing so on resource-constrained devices (like the Raspberry Pi) is even more annoying.

Meanwhile, DepthAI and CEP (https://github.com/cortictechnology/cep#cortic-edge-computing-platform-cep) are AWESOME on the Pi, once you're done with the Annoying^3 above.

And for example, installing CEP alone takes 30 minutes (after you've cloned the repo). And cloning the repo itself may take a very long time, depending on the internet connection. And the installation process can sometimes kill SD cards (because the compilation/etc. uses swap aggressively).

Given that we have thousands of customers using DepthAI and CEP on Raspberry Pi, that's a TON of time wasted by customers - waiting on things to download/compile/install. Best-case, it's 30 minutes if folks want CEP in addition to DepthAI.

Worse, the whole purpose of CEP is to be easy. Look at that drag-and-drop awesome:

visual-programming

It's such a buzz-kill to fight CLI stuff and wait through 30+ minutes of "Is this really going to work?" just to try it out.

So it would be cool if there were a way to prevent that buzz-kill (the annoyance^3), and save at least 30,000 minutes of folks' time (1,000 folks, bare minimum, will use this, and it's 30 minutes for each of them). That's 500 hours of wasted time.

Worse, often customers' uSD cards are too small to fit all the nifty CEP software (which takes 10GB or so). So someone with an 8GB or 16GB card will discover 30+ minutes into this that it's not going to work, at all.

And even worse, the process of trying to install can actually corrupt their already-working 8GB or 16GB uSD card. And that's a really painful way to experience a new product, with a typical end-result of these customers being:

  • The installation failed, and it nuked my Raspberry Pi OS install.

Using a 32GB uSD card prevents this.

Move to the how:

One idea is to do all this installation once to a "Golden 32GB" uSD, and then copy it to 32GB uSD cards at scale and sell these pre-flashed uSD cards. There are huge advantages to this:

  • Each block in each flashed uSD is only written once (see the copy sketch below this list). Compare this with compilation/installation, where a single block could be written hundreds of thousands of times (hence wearing out the card and/or potentially killing it).
  • The purchaser of the card has one step only: Install the uSD card in the Pi. Boom. Done. Check it out below, so nice:

Insert-SD-Card
(Ignore that the GIF is of a 16GB uSD card; I couldn't find a good GIF of a 32GB one. Rest assured; we'll be doing 32GB.)

  • There's no guessing on the customer's part if they "Did something wrong", in addition of course to saving them 30+ minutes of annoyance.
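
To illustrate the write-once point above, the duplication step is conceptually just a raw block copy of the golden image. Below is a minimal Python sketch, assuming a Linux host; the source filename and the /dev/sdX target are placeholders, and production duplicators would add parallelism and verification:

# Hypothetical duplication sketch: copy a golden image to a target card, block by block.
# DOUBLE-CHECK the destination device path before running - "/dev/sdX" is a placeholder!
SRC = "golden-32gb.img"
DST = "/dev/sdX"
BLOCK = 4 * 1024 * 1024  # 4 MiB per write

with open(SRC, "rb") as src, open(DST, "wb") as dst:
    while True:
        chunk = src.read(BLOCK)
        if not chunk:
            break
        dst.write(chunk)  # each block on the card is written exactly once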

Move to the what:

Make a 32GB pre-flashed uSD card that has everything pre-installed. Is there some step that we could do so a customer doesn't have to? Then we'll do that.

BW1099EMB NOR Flash Communications

I'm having issues communicating with the NOR flash on the BW1099EMB through the DepthAI Embedded Development Kit.

In the BW1099EMB datasheet, it states there are 1K pull-up resistors on the SPI0_SIO2 and SPI0_SIO3 signals, which are hardwired to the NOR flash ~WP and ~HOLD signals. The dev kit doesn't have the ~WP and ~HOLD signals routed to the Molex connectors where I could control them externally, so I've probed the MT25Q flash physically on pins 7 and 3, and both were 0V.

I'm assuming the issue is the ~HOLD signal being asserted, which is preventing proper SPI communication. Is there a way to verify the pull-up resistors on the SoM, or maybe something is pulling ~WP and ~HOLD down? I've tried combinations of holding the DepthAI in reset and not in reset, and in both cases ~HOLD was asserted to ground.

Thanks guys.

Cell-Module Version of DepthAI

Start with the why:

In many cases it's advantageous to have DepthAI deployed remotely and to be able to selectively (based on what it sees, or just settings) return metadata, images, and/or video - without the need for hard wiring or Ethernet. If no video (and perhaps no images) are needed, then perhaps LoRaWAN could be used instead (and we're planning on supporting that separately).

In some cases, remote deployment with the capability to send video is needed in locations where WiFi is intractable. In those cases, cell is the most valuable solution.

Move to the how:

We are evaluating various cell modules which we can integrate onboard. We will likely down-select to one primarily based on its flexibility (e.g. the capability to be used in the widest geographical areas).

Move to the what:

Make a cell-module version of DepthAI.

OAK-D-CM4-POE

Already released, see product documentation here


Start with the why:

There is a lot of interest in the community to have a decent Linux host integrated with DepthAI. Having it all enclosed (possibly IP-rated) and PoE-powered would give a lot of options for using the device for final integration. The intention is to use PoE as the power source for the device.

Move to the how:

The same PoE design as OAK-D-POE and OAK-1-POE can be used, meaning that by default only the Ethernet port would be accessible when the device is in the enclosure. Connectors like micro-HDMI and micro-USB should also be added for development and debugging purposes.

Due to having full-blown Linux alongside DepthAI, it would be good to have the SoM connected over a high-speed interface, so we were thinking of using a PCIe-to-USB3 bridge (as is used on the RPi 4B) and connecting the DepthAI SoM over the USB3 interface.

Later we came to the idea that it would be best to have the CM4 and SoM connected directly over PCIe, thus making the most of it and reducing the BOM. At the moment we could not find relevant info on whether the RPi CM4 supports the needed plug-and-play PCIe, so we will have to test this out with the first prototypes. We could also do some workarounds in FW to get it working, but only if PnP is not supported on the CM4.

As a workaround (if PCIe fails to be a good idea), a USB2 interface will still connect the CM4 with the SoM, as on previous versions of the 1097 devices. The USB connection is also great to have for flashing the NOR on the DepthAI SoM if needed for recovery mode.

For some use cases, it is great to have the IMU/magnetometer on the baseboard, as it brings extra functionality to the device.
The device should use the bare minimum of components needed to implement all of the above features.

The what:

Make a design that can be easily placed in an enclosure, and try to make it as small as possible.
Add PoE functionality as a power source; remove all unnecessary connectors/interfaces like CSI, DSI, barrel-jack power, audio, etc. Use smaller connectors where possible. Add the IMU and other features which can fit onboard to make the first prototypes better for evaluation and testing of as many features as possible. Some connectors/circuits can be populated on only a few devices for easier bring-up and later removed in production.

Embedded DepthAI + Thermal Reference Design

Start with the why:

There are many applications for which thermal information is incredibly useful, particularly when paired with high-res RGB and depth information. Previously-unsolvable industrial, agricultural, and medical problems are much more easily solved with this pairing. Examples include more easily understanding machine state (i.e. does that pipe have high pressure/heat behind it; should an alarm go off if a worker approaches it?).

So adding something like the MLX90640 from the Open Thermal Camera to #10 could be incredibly useful. The pairing of the high-res RGB and depth allows augmenting the lower resolution of the Melexis in interesting ways, particularly using ML techniques to aid in accuracy/etc.

Move to the how:

Evaluate whether the board size of #10 needs to be increased to add the MLX sensor. Connect the MLX to the Myriad directly so that ML/CV techniques can be applied directly to this sensor's output (and/or fused w/ the other sensor data).

Move to the what:

Make a version of #10 with an integrated thermal sensor.

DM1097 CM4 HDMI and Keyboard switch problems

I was not able to boot this board
https://github.com/luxonis/depthai-hardware/tree/master/DM1097_DepthAI_Compute_Module_4

when using I connected to the monitor thorough a 4 port HDMI switch (a few years old model).

I had problems with the mouse (left/right click was not working) when it was connected through a USB switch, this model:
https://www.amazon.ca/dp/B081V977MX/

Also, the Myriad X stopped working after a few seconds when running the automated demo (the NN FPS was 0 and there were no more detections), and the whole system would lock up (requiring a reset) after a minute or two when the USB switch was connected.

Without the USB switch everything works fine and is stable, but the no-HDMI-boot issue is still there.

Enclosure for OAK-D IoT-75

Start with the why:

From the O.G. OAK-D KickStarter we learned that ain't nobody want an OAK-D w/out an enclosure. "To add insult to poorly planned injury" was the most memorable quote about OAK-D-IoT-75 not including an enclosure.

So, we're making an enclosure for it.

Move to the how:

It's got WiFi. That's why we didn't make an enclosure at first: it was a time-crunched KickStarter delivery, and we didn't want to be late.

So we took the time to iterate/test and design in plastic openings that are sufficiently wide for the WiFi to escape.

Move to the what:

An enclosure that has all the things we learned folks want:

  • 2x M4 for VESA-mount compatibility.
  • 1x Tripod (1/4-20) for desk convenience
  • USB3 for DepthAI to host.
  • microUSB for direct ESP32 programming.

It's pretty cool looking too:
[enclosure renderings]

Note that this is an older prototype, so the mounting on the back is now M4 at 7.5cm spacing, instead of whatever is shown there (likely M2 at ~43.5mm).

OAK-1 Lite

Already released, see product documentation here


Preorders available: OAK-1-Lite

Start with the why:

We've gotten a lot of requests to make an OAK-1-Lite in addition to OAK-D-Lite (here).

So we're going to make it.

Move to the how:

The OAK-1 enclosure is fine, except that it does not have a dual-fastener system. So we're going to add dual M4 holes to the OAK-1 enclosure, and use this for all OAK-1-Lite units and all future OAK-1 enclosures (as a roll-in). This changes only the rear of the OAK-1 enclosure, as rendered below:
[rear-of-enclosure rendering]

This will address the main (only?) complaint with the current OAK-1. Folks already love that it's small (unlike the O.G. OAK-D, which was too big).

Move to the what:

  • Image Sensor: Sony IMX214
  • Diagonal Field of View: 81.3 degrees
  • Resolution: 4208x3120
  • Aspect Ratio: 1.348:1
  • Focus Range: 8cm - ∞
  • Lens Size: 1/3.1 inch
  • Effective Focal Length: 3.37 mm
  • F-Number: 2.2 ±5%
  • Distortion: <1.0%

Support M12-Mount OV9282 and OV9782 (color global shutter)

Start with the why:

As in #16, there is a huge diversity of M12 lenses available to suit all sorts of problems. These also come in options for filtering specific spectra. And since the OV9281 is grayscale, covering UV-A through IR spectra, this opens up even more spectral-filtering options (enabling things like NDVI and others).

Move to the how:

ArduCam has an existing OV9281 module with M12 lens mount (see below for it used on a stereo camera) which we can leverage on DepthAI (as the OV9282 and OV9281 drivers are effectively the same).

The what:

Support the M12-mount OV9282 (grayscale) and OV9782 (color global-shutter equivalent with the same specs) camera modules from ArduCam. Prototype samples of both the OV9282 and OV9782 from ArduCam are shown below working with depthai. Debayering of the OV9782 is not implemented yet, but the sensor is returning data.

image

OAK-D EDU

Start with the why:

Cortic Technology (here) has been using OAK-D for K-12 education. Ye Lu is on a mission to get this tech to K-12. He's done a lot of these education series and has discovered what is actually needed in the market to make this sort of technology accessible to K-12 students - and what the big pain-points and blockers are.

One of the key needed bits is easy software. Cortic has been working on the software to make this applicable to K-12 (e.g. here), but there are hardware parts that are really really painful still. And if those are solved, he thinks this could scalably be applied across education.

And it looks great and will work fantastically for this. I’ve (Brandon has) done 1st-grade Hour of Code with such interfaces before.

image

In K-12 education settings there is a need for a “just works” spatial AI system where everything is included. Since it is kids working with it, it needs to be all enclosed, simple, and boot up and just work.

The use-case is on a robot like here:
https://twitter.com/CorticTechnolo1/status/1386761565475545092?s=20
image

And the robotics portions of these robots are actually controlled over Bluetooth or WiFi. So no electrical connection is needed.

And in fact, it is preferred to have the whole perception/host-compute/etc. battery-powered so that it can be built into the lego/etc. anywhere on the bot - without wiring concerns/mess/etc. - as wireless control of the bot allows.

One of the most painful parts of using such a system is getting headless operation going - meaning getting a remote desktop connection working. So the feedback from Ye Lu is that having an enclosed product with an integrated battery, a single USB port (for charging), a built-in screen, and WiFi/BT for communication would be super helpful.

This would allow the product to boot up w/out any cables and show the demo right away on the screen, so that students can see how the computer is seeing. And then, from there, they can configure the device to do things and see how the computer sees things.

And having an IMU, microphones, and a speaker on the device will allow students to explore other aspects of interaction all on-device: motion, audio input, and audio output for feedback, and/or for constructing interactive games with the robot.

Since this device will be used in education, it would be beneficial for it to be able to mount directly to legos.

Move to the how:

Leverage the CM4 designs we have (OAK-CM4-POE) to make a DepthAI + CM4 integrated product, leveraging the battery charge/discharge we have from the experimental 1092+LiPo work we did.

Use the CM4 with built-in eMMC so students can store video/etc. to the device for as long as the battery lasts (does not need to be any longer than that).

Use the CM4’s capability to drive a display to have an integrated display on the back.

Design an aluminum enclosure to help dissipate heat and make this easy/rugged for users and keep it cool enough.

Leverage the microphone design (likely using just a stereo pair of mics - that’s enough probably) and speaker design from the DM2092.

Move to the what:

All-in-one enclosed product with:

  • Cameras:
  • World-facing:
    • 2x OV7251
    • 1x IMX214
  • User-facing:
    • 1x IMX214
  • Touchscreen
  • Internal battery (5,000mAh?)
  • CM4 host
  • eMMC storage TBD size
  • WiFi, and
  • BT
  • Stereo Microphone (connected to DepthAI)
  • Speaker (connected to DepthAI)
  • Aluminum Enclosure
  • BNO086 IMU
  • Single USB3C port
  • Tripod mount on bottom
  • 7.5cm spaced M4 mounting on bottom
  • Power switch
  • Multi-purpose programmable button
  • Lego technic mount holes around (TBD) to allow mounting directly to legos.

USB3C port:

  • It would be nice for the USB port to act like on phones:
    • Can be used for charging, but also as a host powering attached devices (mouse/keyboard for the simpler cases).
    • And if a Y-adapter is used, it could act as a "docking station": charging the battery, but also functioning as a host.


Initial Concept below:
[initial concept renderings]

How DepthAI hardware is different from Intel RealSense

Hello,
I got to know of Luxonis products yesterday while searching for a depth camera; I saw RealSense, then I saw Luxonis.

Can you please elaborate on how these products are different (Intel RealSense D4xx series + Movidius)?

OAK-D Pro-PoE

Already released, see product documentation here


Feb 3rd, 2023 EDIT: OAK-D-Pro-POE that is currently available has IP65 rating. We are working on a new design that should be IP67 rated.


Preorders available: OAK-D-Pro-POE

Start with the why:

OAK-D-PoE was our "make it exist" product. It served as a pathfinding design to see what the community's reaction to it would be and what could be improved. It has been quite well received, with specific feedback on what would be nice in a Pro version.

The OAK-D-PRO-POE will be the next step and an upgrade to the original OAK-D-PoE, implementing user feedback and improvements through experience gained from the previous design.

Overall folks are happy with OAK-D-PoE. But the requests we do get are nearly all about illumination. Specifically, the requests ask for:

  • IR laser dot projection for active depth (allowing low-light and no-light depth sensing)
  • Blanket IR LED illumination (allowing low-light and no-light computer vision).

Everyone loved that OAK-D-PoE is IP67-sealed (so we're keeping that), but many desired an M12 connector instead of the RJ45 IP67 gland, which is a bit large - and, more importantly, is the reason OAK-D-PoE is a bit bigger and heavier than some desired.

  • M12 connector - smaller and if used, the whole device can be smaller and lighter
  • Weight - Some found OAK-D-PoE too heavy
  • Size - Some found OAK-D-PoE too big

And the final common request was some form of external IO for being able to trigger or be-triggered by external equipment:

  • External IO connectivity is desired

Move to the how:

Connectivity and power:

Just like the OAK-D-POE design, the OAK-D-PRO-POE will have an Ethernet connection for data and PoE for power. But that is where the similarities end, as the OAK-D-PRO-POE will feature a more robust and industrial connector type called M12.

To afford external IO connectivity, we will implement an M8 or similar connector. Likely with USB host-support from the Myriad X so that external USB devices (like thermal cameras, or IO expanders) can be used.

Form factor:

The form factor will be heavily based on the OAK-D-PRO design (#114) with its sleek small case. The M12 connector makes it possible to make this design a lot smaller. And the design will still have the same IP67 rating as OAK-D-PoE.

Other features:

Implementing the IR LED and dot projector will be the same as in the OAK-D-PRO model (details in #114), as it is already in the testing phase and shows a lot of promise.

Move to the what:

A small, lightweight, cost-reduced version of OAK-D-POE that is still IP67-sealed, has an IR laser dot projector and an IR illumination LED on-board (Pro version), and offers IP67-sealed IO connectivity.

  • Optics/Illumination/Active Depth: Same as #114.
  • M12 X-coded for Gigabit Power over Ethernet
  • M8 connector for GPIO, hardware multi-device sync, and direct external power (instead of PoE).
We intended to put these on the M8:
  • power input (5V) as an option instead of PoE, or 5V power output for some external circuitry
  • USB2 (D+ and D-)
  • camera IOs: FSIN (frame sync) and STROBE (for driving a flash)
  • 2 other aux IOs, capable of UART, I2C or GPIO

Support Global Shutter OV9782 Instead of IMX378 for Exact-Matched FOV, Optics, etc. Between Color and Grayscale

Start with the why:

In some applications, exact, pixel-by-pixel alignment of the color camera and the grayscale/depth cameras is desirable. To support this, having the color camera have the exact same parameters (global shutter, image sensor pixel size, chief ray angle (for lighting consistency), field of view, focal length, capability to hardware-level sync, etc.) is extremely beneficial.

Because of such a use-case, OmniVision actually makes the equivalent sensor to the grayscale OV9282, the OV9782.

It is made for this exact sort of application.

Move to the how:

ArduCam is building OV9282 and OV9782 modules for DepthAI that match in all the above ways, and we are integrating them into the open-source ecosystem (and ArduCam will likely release their own DepthAI variants with them).

Below is initial testing of the OV9782 (left) and the OV9282 (right) from ArduCam. These variants use the M12-mount lenses for flexibility. But there will be fixed-lens (i.e. smaller) modules as well.

image

Move to the what:

Support OV9782 (the color version of the OV9282) global shutter as the color camera in place of the IMX378.

Support M12-Mount IMX477

Start with the why:

As Katherine_Scott (of ROS) mentioned here, there is a huge diversity of M12 lenses available to suit all sorts of problems. These also come in options for filtering various specific spectra (opening up things like NDVI and others).

So having an M12-supporting version of the IMX477 (the low-CRA version of the IMX378 used on megaAI and DepthAI) would open up all sorts of applications.

Move to the how:

ArduCam has offered to make an IMX477 M12-compatible camera module for megaAI and DepthAI ecosystem use. Our existing driver can be easily adapted to use this version.

Move to the what:

Support the ArduCam IMX477 M12-compatible camera module.

OAK-D Pro

Already released, see product documentation here


Preorders available: OAK-D-Pro

Start with the why:

1. Mechanical design.

The mechanical design of OAK-D is limiting, with the following drawbacks:

  • The only mounting is a single 1/4-20 tripod mount
  • Lack of 2-screw solution for secure panel mounting (which can result in unintended rotation if additional mechanical design isn't done to prevent this)
  • Bigger than necessary
  • Heavier than necessary

2. Active Illumination

OAK-D was architected for applications where passive depth performs well (and can often outperform active depth; as IR from the sun can be blocked by the optics when purely-passive disparity depth is used).

There are many applications however where active depth is absolutely necessary (operation in no light, with big/blank surfaces, etc.). And there are many OAK-D customers who would like to use the power of OAK-D in these scenarios where purely-passive depth is prohibitive.

Move to the how:

  • Add a VESA-spec (7.5cm, M4) set of mounting holes to the back of the enclosure, moving the tripod-mount to the bottom.
  • Retool the image sensors and layout to minimize size and weight while improving thermal performance.
  • Add IR laser dot projection (for no-light depth) and IR LED blanket illumination (for no-light computer vision/perception).

Architect the IR laser and IR LED such that all these modes of operation are supported:

The idea is that they'd be used in one of these permutations:

  1. IR laser on for all frames
  2. IR LED on for all frames
  3. IR laser on for even frames, odd frames no illumination (ambient light)
  4. IR LED on for even frames, odd frames no illumination (ambient light)
  5. IR laser on for even frames, IR LED on for odd frames
  6. IR laser and IR LED on for all frames
  7. IR laser and IR LED both off for all frames.

It is likely that modes 1 and 5 will be used the most. But enabling all the permutations above allows maximum flexibility for adapting illumination to an application (including dynamically). For example, mode 6 will likely rarely be used, but there are certain cases where having both on may be beneficial.
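
For a sense of how this looks from the host side, the depthai 2.x Python API exposes runtime control of both illuminators. A minimal sketch - the pipeline setup is elided, and the mA drive values are placeholder examples to be checked against the hardware limits:

import depthai as dai

pipeline = dai.Pipeline()  # assume a stereo-depth pipeline is built here

with dai.Device(pipeline) as device:
    # Mode 1: IR laser dot projector on (argument is drive current in mA)
    device.setIrLaserDotProjectorBrightness(765)
    # Mode 2 would instead enable the flood LED:
    # device.setIrFloodBrightness(1500)

The frame-interleaved modes (3-5) would additionally require syncing the illuminators to the sensor strobe, which is handled at the firmware/hardware level rather than from this API.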

Move to the what:

The same image sensors and FOV as OAK-D:

  • 2x OV9282 (but IR capable)
  • 1x IMX378 fixed-focus
  • IR laser dot projector
  • IR LED
  • Small, easier-to-integrate form-factor like below:
    [form-factor renderings]

1092: failed to find a device / can't enable a device

When running a demo/script on the 1092, the host can't find a device or can't enable it. The following error messages appear:

Cannot enable. Maybe the USB cable is bad?
attempt power cycle

or

RuntimeError: Failed to find device after booting, error message: X_LINK_DEVICE_NOT_FOUND
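
When debugging this, a useful first step is to check whether the device enumerates on USB at all. A small diagnostic sketch using the depthai 2.x Python API:

import depthai as dai

# Lists every visible OAK device with its MX ID and XLink state
# (X_LINK_UNBOOTED / X_LINK_BOOTLOADER / X_LINK_BOOTED).
for info in dai.Device.getAllAvailableDevices():
    print(info.getMxId(), info.state)

If nothing is printed, the usual suspects are the USB cable (it must be a data cable, not charge-only), port power, or udev rules on Linux.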

Visual Assistance Device

Start with the why:

Spatial AI is starting to be possible with embedded platforms. Spatial AI provides information on what objects are, and where they are in physical space. So this information can then be transduced into other formats (text, audio, haptic/vibration, tactile display, etc.) which can help folks with visual impairments.

For example, such a spatial AI solution could provide this sort of insight when walking in a park:
“There is a park bench 12 feet in front of you at 1 o’clock and it has empty seats”

Or it could find text on a page or a sign and offer to read it to the user, automatically. It could also provide insight as to where the user physically is in proximity to other objects like vehicles, people, bikes, etc., including even warning when a vehicle collision is imminent (e.g. here). Or give feedback on where someone is on a path (like here).

This current effort started when Marx Melencio (who is visually impaired) reached out here, showcasing the system he has already made, and with interest in using DepthAI to actually productize.

Move to the how:

DepthAI is an Embedded Spatial AI platform (perhaps the only one as of this writing?) which provides neural inference (e.g. object detection) in combination with disparity depth to give object localization (i.e. what an object is and where it is in physical space).

And it can run series/parallel networks as well (a coming feature, see first version here), which allows a pipeline of networks to provide additional information based on the context (automatically or manually). For example, when outside/walking in a downtown area (the context) the system could automatically detect road signs/stop signs/stop lights and tell the user the state of them (which would be a cascade of a find-signs network followed by a ‘read the signs’ and/or ‘state of digital sign’ network).

So we plan to use DepthAI to make an easy-to-use visual assistance system, taking advantage of the fact that DepthAI is embedded (i.e. doesn’t require an OS or anything), low-power (i.e. can be battery powered over the course of a normal day) and is small enough to be body-worn w/out encumbrance.

There are a variety of ways we could attack such a device (a variety of potential ‘how’), which fall into two categories:

  1. Piecing together existing hardware (largely for prototyping)

  2. Building custom hardware (for a path to productization and selling as a standard product)

Piecing together existing hardware:

There exist a couple variants of DepthAI which could be used for such an application, and fortunately DepthAI is open-source (MIT-licensed, here), so these designs can be modified into something that is more applicable for this visual-assistance device. Below are the applicable designs and how we’ve thought to use them:

  1. DepthAI Raspberry Pi Compute Module Edition (BW1097), here -

    a. This has a whole computer built in.

    b. It could be used with a GoPro adapter mount (here) and this GoPro mounting kit (here) to allow mounting practically anywhere on the body (head, chest, wrist, etc.).

    c. This has all the processing and communication built in (it's running Raspbian), but does not have a solution for the battery, so that would need to be worked out - mainly in terms of how to mount and communicate with the battery.

    d. One option would be to make a new mount that has the battery built in.

  2. DepthAI Onboard Cameras Edition (BW1098OBC) here.

    a. This also could be used with a GoPro adapter mount (here) and then connected (and powered) over USB to some other on-person computer/power source (say a Pi w/ a Pi HAT LiPo power source).

    b. So this solution would have 1x USB cable going from the perception device (DepthAI BW1098OBC) to the host processor device (say a Pi with a battery).

  3. DepthAI Modular Cameras Edition (BW1098FFC) here

    a. This would allow ‘hacking together’ a prototype of smart glasses.

    b. For example the main board could be on the nape of the back of the neck, connected via FFC to the cameras on smart glasses.

    c. The trouble is it's a lot of flexible flat cables (FFCs), and these cables are relatively fragile, so it's not ideal.

Building custom hardware:

  1. Making actual spatial-AI glasses where everything is integrated. This picture here summarizes it. A battery would likely be integrated directly into the frame (with the ESP32) or on a lanyard which attaches to the back of the frames.

    a. The disadvantage of this is that it is specifically designed for wearing on the head… and it may be nice, for example, to have a head-mounted unit and a wrist-mounted device (the head-mounted gives situational awareness, and the wrist-mounted lets you explore around you (e.g. read a piece of paper) without having to move your head all over).

    b. This is also a more complex custom design w/ some technical risk and user-experience risk including us as a team not really knowing how to make comfortable glasses/etc.

  2. Make a small fully-integrated Spatial AI box w/ a GoPro mount so it can be mounted to the wrist, chest, or head (using a GoPro adapter set like here), and which has WiFi and BT interfaces.

    a. This is the simplest/fastest approach, having the lowest technical risk and user experience risk as it can just be a self-contained, small system which uses field-proven GoPro mounts for attachment to head, chest, or wrist.

    b. It also allows using multiple devices on a person. So for example one on the back, one on the head, one on the wrist, one on the chest. And they connect over BT or WiFi to say an on-person Raspberry Pi which handles prioritizing which data comes back based on the person interacting w/ the onboard software.

    c. It is a not-huge amount of work to re-use the design of the BW1098OBC (here) while adding an ESP32 and a battery w/ charge/protection circuitry.

    d. Probably worth reducing the stereo baseline from the 7.5cm there to something like 4cm or so, as it will still provide plenty of distance vision while allowing a closer-in minimum stereo-disparity depth-perception distance (see calculation here, and the sketch just below) of 0.367 meters (for a 4cm baseline) instead of 0.689 meters (for a 7.5cm baseline).
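
Those minimum-distance numbers follow from the standard disparity relation Z_min = focal_length_px × baseline / max_disparity. A quick sketch - the ~872 px focal length (OV9282 at 1280x800, ~72° HFOV) and the 95-pixel disparity search range are assumed values chosen to match the figures quoted above:

def min_depth_m(baseline_m, focal_px=872.0, max_disparity_px=95):
    """Closest distance at which disparity is still measurable."""
    return focal_px * baseline_m / max_disparity_px

print(min_depth_m(0.075))  # ~0.689 m for the 7.5cm baseline
print(min_depth_m(0.040))  # ~0.367 m for the 4cm baseline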

So after exploring the options above, we decided that '2' from the custom-hardware section seems like the way to go. So below, in 'what', we describe the device we are building:

Move to the what:

So out of wanting to design this, we realized we should do it in stages, so the first part of this effort will be a DepthAI ESP32 Reference Design / Embedded DepthAI Reference Design as in #10, which will be re-usable as the core of the implementation needed below:

A “small battery-powered Spatial AI box with WiFi and BT”

  • Modular, self-sufficient Visual Assistance device with built-in power, WiFi, and BT interfaces.
  • Battery compartment for 2x 18650 3,400mAh batteries (e.g. here)
  • ESP32 for WiFi/BT and user-modifiable custom code.
  • 4cm stereo baseline, 3 cameras total: stereo pair + 12MP color
  • Make the total package as small as reasonably possible.

Battery Notes:

  • Maybe for now this is direct mounted to PCB?
    • Like one of those solder-mount battery holders?
  • If we use protected cells for now, we don’t even need to have a charger on there… just something to monitor state of charge
    • Having an integrated charger would of course be cooler.
  • 3,400mAh cells should run this whole design for approximately 9 hours of max-possible-performance from DepthAI (+ the negligible power use of the ESP32) - rough math below.
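
A back-of-the-envelope check on that runtime, assuming both 18650 cells are used and a ~2.8 W average system draw (both assumptions, not measurements):

cells = 2
capacity_mah = 3400   # per 18650 cell
nominal_v = 3.7       # Li-ion nominal cell voltage
avg_draw_w = 2.8      # assumed DepthAI-at-full-tilt + ESP32 average

energy_wh = cells * capacity_mah / 1000 * nominal_v  # ~25.2 Wh
print(energy_wh / avg_draw_w)                        # ~9.0 hours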

Camera module placement notes:

  • Color camera close to the 'right' (device's view) stereo camera (for better object alignment between color-sensor-based inference and the 'center of the universe' of the stereo disparity depth result). Below is an example of a 3.5cm-baseline board we had made, with the color camera sitting on there for reference:

Example Camera Layout for Visual Assistance Device

  • Note that 4cm baseline was just an initial idea.
    • Maybe letting another size-constraint (e.g. the battery size) determine the separation may make sense.
      • For example the 18650 batteries are 6.5cm, so the device will likely be at least 6.5 cm long, which would afford 5cm stereo baseline probably.
      • So this is something we could change around as the layout solidifies.

Mounting:

  • In whatever case we make for this we should:
    • Have the back of it be metal to act as the structure and the heatsink. (If the whole thing is metal, that's great too.)
    • Make a GoPro mount on the back so that this kit or similar can be used to support wrist-mount, chest-mount, and head-mount (in addition to a bunch of other likely-possible permutations).

PR: STEP files for BW1098OBC and BW1093 OAK Cameras

Hello there, I built simple files for a version of the BW1098OBC for people who just want a model with real dimensions and no details; maybe the DepthAI team can add them to the documentation to support the community.

image

Dimensions according to https://github.com/luxonis/depthai-hardware/tree/master/BW1098OBC_DepthAI_USB3C

Public link of the files https://grabcad.com/library/bw1098obc-for-oak-d-stereo-camera-1

Also, for BW1093

image

Public link of the files https://grabcad.com/library/bw1093-for-oak-1-color-camera-opencv-1

Add IMU (+ Magnetometer) Support

Start with the why:

Although not intended for V-SLAM, DepthAI could provide decent V-SLAM in parallel with other functions (through depth + feature-tracking (VTRACK) stream outputs).

To do so, having an integrated IMU would enable this additional/parallel use-case.

Move to the how:

We have initially prototyped support for the BMI160 and BMM150, and have the IO for them brought out through the BW1099's 100-pin connector.

We've also prototyped the BMX055/BMI055, which are other options. And discussing offline, it seems the BNO085 may be an excellent choice, as it provides an absolute orientation result in one package.

We could alternatively add these directly to the BW1099 module, but it probably makes sense to leave the choice of whether (and where) they are included to the designer of the baseboard... as IMU location can be of critical importance depending on the constraints of the design and the colocation of image sensors/IMU/etc. relative to pertinent center locations on the design. So having the IMU/magnetometer on the baseboard affords these degrees of freedom.

WRT the USB transfer of IMU data, XLink won't be a good option if we want low latency, as large transfers (from video streams) would delay the transmission.

We could add an interrupt endpoint to the USB device (we could use the HID class, for example). XLink currently uses a single endpoint (bulk), and we can have a total of 3 different endpoints on DepthAI for output to the host.

The what:

Add the IMU & Magnetometer to the following baseboard designs, in this priority:

  • BW1092 #10
  • BW1098OBC
  • BW1097
  • BW1098FFC
  • BW1098EMB

Implement an interrupt endpoint on the USB device for super-low-latency 500 or 1,000 Hz updates to the host from the IMU/Mag combo.
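
For context, the host side of such an interrupt endpoint could look roughly like the sketch below (using pyusb; the 0x03E7 Movidius vendor ID is real, but the 0x81 endpoint address and 32-byte report size are placeholders for whatever the firmware would define):

import usb.core

dev = usb.core.find(idVendor=0x03E7)  # Intel Movidius / Myriad X
if dev is None:
    raise SystemExit("no Myriad device found")
dev.set_configuration()

while True:
    # Blocking read of one (hypothetical) IMU report from the interrupt IN
    # endpoint; at 500-1,000 Hz a report would arrive every 1-2 ms.
    report = dev.read(0x81, 32, timeout=100)
    print(bytes(report).hex())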

Schematics & PCB

Hi, are the schematics and PCB for this project also open-source, as with many of the other projects? I can only find the NG2094_OAK-D-PRO-W-DEV repository, but sadly it is not a single-board assembly like the OAK-D-PRO product is.

I assume this is to protect the IP in the SoMs?

Battery options/recommendations for all models

I tried different 5V USB batteries (Li-ion based) that output 2A, 2.4A, and 3A, and they seem to be fairly stable on the DM1097 CM4 model, the most power-hungry.
I will run some endurance tests and provide more details once I mount them on my robots.

Let's collect here your recommendations for powering the DepthAI modules by battery.

Also, please provide the min/max voltage that can be used and what temperatures we should expect when the voltage is on the lower end, for each model.

I just noticed that the Myriad heat-sink temp is about 57 degrees Celsius while running the demo (on API version 1.0), using a DeWALT 20V 2Ah battery with an 18V converter that has a USB port outputting 5V at 2A max.
The RPi CM4 temp is about 62 degrees Celsius:

pi@raspberrypi:~ $ vcgencmd measure_temp

No load:
temp=62.8'C

Under load:
temp=68.6'C

Here is the adapter; I assume the voltage is a bit lower than 5V under load (I need to measure it):
https://www.amazon.ca/TENMOER-DCA1820-Battery-Adapter-Compatible/dp/B08HMVYYG5

IR Illumination + IR-Capable Grayscale Stereo Pair

Start with the why:

While adding pattern projectors like in #20 allows disparity depth to work in the absence of any ambient lighting, neural inference likely will not work (as there will be little dots of illumination, not overall illumination).

So to allow 3D object localization (i.e. an object detector like YOLO or MobileNet-SSD fused with spatial information), having overall illumination (and not just pattern projection) is required.

Move to the how:

So there are 3 options for supporting the IR-capable camera modules:

  1. Use IR-only camera modules (i.e. bandpass around IR).
  2. Use Visible-light + IR-light capable camera modules (i.e. a single bandpass covering all visible + IR spectra)
  3. Use a mechanical IR-cut filter that engages during high ambient light and disengages during low-light conditions when the IR illuminators are active.

We have tested both 1 and 2 with DepthAI (example images below) and found that 1 produces much sharper images when IR illumination is being used (this IR flashlight was used). See the quick test results below:

  1. Example image of IR-bandpass-only camera modules with the IR flashlight below:
    image
  2. Example image of visible-light + IR capable camera modules with the IR flashlight below:
    image

We have not yet tested option 3, but it could be an interesting solution - though with the cost that such mechanical moving parts are usually a point of failure.

Move to the what:

Support IR illumination for object detection and disparity depth in total (ambient) darkness conditions.

OAK-D Pro-W

Already released, see product documentation here


Preorders available: OAK-D-Pro-W-Dev - setup docs

Start with the why:

We've now gotten a bunch of reach-outs about something like OAK-D, but with 150 DFOV, IR-laser-dot-projection depth, and matching resolution/FOV/global shutter for the color camera.

This is similar to this idea, #21, which we already had on the roadmap, but with the change that apparently a lot of applications want this and wide FOV, which is separately #15.

Move to the how:

ArduCam already implemented #15, and now produces this IR-capable 150DFOV OV9282 module as a standard product for DepthAI customers, here. So far though, this has only been used in custom products built with OAK-SOM/etc.

We should work with ArduCam to (1) update this OV9282 to the new/standard ArduCam connector format here, and (2) make an OV9782 variant with the same connector/format but that blocks IR (for good color representation). We will likely prototype first with the existing format/connector.

Move to the what:

Wide FOV version of OAK-D-PRO (#114)

OAK-D-PRO-W (W = WideFOV)

  • 2x OV9282 1280x800 global shutter grayscale 150 DFOV / 127 HFOV (IR-capable)
  • 1x OV9782 1280x800 global shutter color 150 DFOV / 127 HFOV (IR-blocking)
  • 2x IR laser dot emitter (for wide FOV)
  • 2x IR LED (for wide FOV)
  • Similar (same) form-factor as #114

Update:

The current iteration of hardware does not support the dot emitters. The main reason is that the laser certification was taking longer than expected, and the chip shortage for the projector prevented us from making enough boards. As we got a lot of requests for WFOV cameras only, we instead shifted the focus to this aspect of the design to get it out as soon as possible. The flood LED is still implemented in the design and can be used for testing in low-light situations.

BW1092: DepthAI ESP32 Reference Design | Embedded DepthAI Reference Design

Start with the why:

One of the core value-adds of DepthAI is that it runs everything on-board. So when used with a host computer over USB, this offloads the host from having to do any of these computer vision or AI work-loads.

This can actually be even more valuable in embedded applications - where the host is a microcontroller communicated with over SPI, UART, or I2C, and is either running no operating system at all or some very lightweight RTOS like FreeRTOS - so running any CV on the host is challenging or outright intractable.

DepthAI allows such use-cases, as it converts high-bandwidth video (up to 12MP) into extremely low-bandwidth structured data (in the low-kbps range; e.g. person at 3 meters away, 0.7 meters to the right).

So this allows tiny microcontrollers to easily parse this output data and take action on it. For example an ATTiny8 microcontroller is thousands of times faster than what is required to parse and take action off of this metadata, and it's a TINY microcontroller.

So for applications which already have microcontrollers, where full operating-system-capable 'edge' systems have disadvantages or are intractable (like size, weight, power, boot-time, cost, etc.), or where such systems are simply overkill, being able to cleanly/easily use DepthAI with embedded systems is extremely valuable.

So we plan to release an API/spec for embedded communication with DepthAI over SPI, I2C, and UART to support this.

However, a customer having to write against this spec from scratch is a pain, particularly if hardware connections/prototyping need to be done to physically connect DepthAI to the embedded target of interest. And having to do these steps -before- being able to do a quick prototype is annoying and sometimes a show-stopper.

So the goal of this design is to allow an all-in-one reference design for embedded application of DepthAI which someone can purchase, get up and running on, and then leverage for their own designs/etc. after having done a proof-of-concept and shown what they want works with a representative design.

So to have something embedded working right away, which can then serve as a reference in terms of hardware, firmware, and software.

The next decision is what platform to build this reference design around. STM32? MSP430? ESP32? ATMega/Arduino? TI cc31xx/32xx?

We are choosing the ESP32 as it is very popular, provides nice IoT connectivity to Amazon/etc. (e.g. AWS IoT here) and includes Bluetooth, Bluetooth Low Energy (BTLE), and WiFi, and comes in a convenient FCC/CE-certified module.

And it also has a bunch of examples for communicating over WiFi, communicating with smartphone apps over BT and/or WiFi, etc.

And since we want users to be able to take the hardware design and modify it to their needs we plan to:

  1. Open source the design
  2. Use modules.
    • this allows easier modification w/out having to work w/ the nitty-gritties of FPBGA HDI board design/etc.
    • the BW1099 module and ESP32 modules are both FCC/CE certified so this makes the final design easier to certify (particularly for WiFi).
    • a chip-down design is then relatively straightforward should this be needed later (say for super-tiny embedded applications), and would be firmware/etc. completely compatible w/ the modular design.

Often embedded applications are space-constrained, so making this reference design as compact as reasonable (while still using the modular approach above) is important.

Another why is that this will serve as the basis for the Visually-Impaired assistance device (#9).

Move to the how:

Modify the BW1098OBC to add an ESP32 System on Module, likely the ESP32-WROOM-32.

To keep the design small, it may be necessary to increase the number of layers on the BW1098OBC (from the current 4) to 6 or perhaps 8, and/or use HDI techniques, such that the ESP32 module can be on the opposite side of the board (but overall in the same physical location) as the BW1099 module, with perhaps the cameras above or below this ESP32 module.

In this system, the DepthAI SoM will be the SPI peripheral and the microcontroller (the ESP32 in this case) will be the SPI controller.

Have a GPIO line run from the DepthAI SoM to the ESP32, for DepthAI to be able to poke the ESP32 when it has new data.

That way we can support interrupt-driven data transfer on the ESP32.

So the flow would be:

  • DepthAI has data ready for the ESP32 (which is acting as the controller) to pull off of DepthAI.
  • DepthAI pulls this line high to indicate that the data is ready.
  • This line is then going to a GPIO w/ interrupt capability on the ESP32.
  • The interrupt service routine does the controller bits of SPI to pull the data from the peripheral DepthAI.
  • The DepthAI drives this GPIO line back low to indicate that there is no data left.

Note that this would also still allow the ESP32 to do polling instead.
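
A rough MicroPython sketch of the ESP32 side of this flow - the pin numbers, SPI bus, and 256-byte read size are all hypothetical placeholders, and the real packet framing would come from the SPI protocol spec mentioned above:

from machine import Pin, SPI

spi = SPI(2, baudrate=4000000, sck=Pin(18), mosi=Pin(23), miso=Pin(19))
cs = Pin(5, Pin.OUT, value=1)          # chip-select to the DepthAI SoM
data_ready = Pin(4, Pin.IN)            # GPIO driven high by DepthAI

pending = False

def on_ready(pin):
    # Keep the ISR tiny: just flag that data is waiting.
    global pending
    pending = True

data_ready.irq(trigger=Pin.IRQ_RISING, handler=on_ready)

while True:
    if pending:
        while data_ready.value():      # DepthAI holds the line high while data remains
            cs.value(0)
            chunk = spi.read(256)      # placeholder transfer size/framing
            cs.value(1)
            print(len(chunk))          # application-specific parsing goes here
        pending = False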

Move to the what:

  • A compact DepthAI design with ESP32 and 3 cameras.

Here's a super-initial idea on what this could look like, based on setting cameras and a sheet of paper the size of the ESP32 module above (18mm x 25.5mm) on an old BW1098EMB design, which is 60x40x20.5mm including heatsink and 60x40x7mm without heatsink (which is likely how it would be used in a product... simply heatsink to the product enclosure).

[initial layout mockups]

OAK-D Series 2 (OAK-D-S2)

Preorders available: OAK-D Series 2

Start with the why:

Improve the mechanical design of OAK-D, as is being done in #114, but without adding the cost of active depth (both in terms of $ and in terms of worse bright-outdoor depth performance).

Move to the how:

Reuse all of OAK-D-PRO, but do not populate the active-illumination components, black out the glass where these would be, and use the existing OAK-D CCM.

Move to the what:

Make a passive-only OAK-D successor with the mechanical advantages of #114. And offer fixed-focus RGB, like OAK-D-Pro, as well for better vibration handling.

[renderings]

This will most likely still be called OAK-D but will be Series 2 or S2 for short.

AR0234 2.3MP Global Shutter Color 1920x1200 60FPS Support

Start with the why:

Higher-resolution global shutter color can be necessary in conditions where the neural model or CV pipeline needs to perceive high-speed moving objects in color with high(-ish) resolution. Global shutter gets quite pricey above ~2MP, so the 2.3MP AR0234 is a "sweet spot" sensor: it allows high resolution (for global shutter) while also providing color and good global shutter performance.

So for these applications that require higher resolution than the OV9782 (which DepthAI already supports, see #17 and #21) on fast-moving objects, in color, and with global shutter, it would be beneficial to support the AR0234, which is 2.25× the resolution of the OV9782 (1920x1200 vs. 1280x800).

Move to the how:

Work with ArduCam to build the AR0234 modules as needed, and leverage our Gen2 pipeline builder to integrate a color camera node that supports AR0234.

Move to the what:

Support global shutter 1920x1200 60FPS AR0234 in the DepthAI ecosystem.

Commute Guardian

Start with the why:

It's time. We've implemented all the base functions of the platform (https://github.com/orgs/luxonis/projects/2), all of which were guided by the North Star of keeping people who ride bikes safe (here).

Namely, we now have all the pieces that go into building this below, in hardware, firmware, and software:

  • Onboard depth sensing to 30+ meters
  • Wide FOV cameras for detecting more side-impact situations
  • Lossless zoom at 12MP, for recording and recognizing license plates quite far away
  • Retraining of vehicle object detector that runs at high FPS (greater than 30FPS) and long-distance (greater than 30 meters)
  • Feature tracking, edge filtering, semantic segmentation that can all run in parallel - for vehicle-edge tracking in physical space
  • Integrated host support (running Yocto Linux)
  • High-bandwidth WiFi + BT support (for streaming to smartphone)

And likely a bunch of others.

Move to the how:

For the first version:

  • Leverage our OAK-SoM-Pro (here) with MA2095 (Keem Bay) populated on a baseboard (i.e. go with a baseboard + SoM approach).
  • Use the ~150-degree OV9282 modules from OAK-D W (#152). Orient such that the widest field of view is parallel to the horizon.
  • Use the ~120-degree IMX378 (fixed focus) from the variant of OAK-D W that we're preparing (not yet documented). Orient such that the widest field of view is parallel to the horizon.
  • BT and WiFi connected to MA2095/OAK-SoM-Pro directly.
  • Plan on e-bike deployment only, with 5V and at least 1A input as a requisite.
  • Open Source the whole thing.

Move to the what:

Make the first version of Commute Guardian. Probably something that looks like this:
image

  • IP67 sealed

BW1099EMB Heatsink / 3D STEP

Hi DepthAI team,

I see the 3D models of the carrier boards have the heatsink model integrated, but I'm not seeing the heatsink in the standalone SoM models. Is there a 3D STEP available for the BW1099EMB with the heatsink mounted, and if not, is there a STEP just for the heatsink available? Thank you!

Request: more detailed imager module specs

Why
When building multi-imager setups, we sometimes need to design with a clear understanding of the imager FOVs to determine imager/lens type, quantity, and placement. And in general there doesn't seem to be a single place where this information is published alongside other detailed information.

What/Where: this is a request to include information similar to what Intel includes in their device documentation (example below). Ideally this would include H/V/diagonal FOVs, sensor size, and perhaps information on depthai-supported modes.

From researching, it looks like there is some (scattered) information for this, but it is seemingly incomplete; having canonical info in this repo seems to make sense and would make an excellent reference.

image

The readme doc for each module folder could include information like the above.

OAK-D-IoT-40 Series 2

Start with the why:

The OAK-SOM-IoT (1099EMB) has a reference design for embedded use cases (actually it has 2: the OAK-D-IoT-40 and OAK-D-IoT-75), but the OAK-SOM-PRO (2099) does not yet have such a reference design.

And in many cases, the OAK-SOM-PRO may be more appropriate for such embedded applications - including cases like the CommuteGuardian, where it may be desirable to store video to onboard eMMC or SD-Card.

And also, in many applications, having onboard microphones is quite helpful or an absolute requirement (which the OAK-SOM-PRO supports).

Another related update (which we should propagate back to the IoT-40 and IoT-75) is that the ESP32 programmer's microUSB connector can easily break off (as it's surface-mount, and microUSB is just generally too fragile), and it's also a bit annoying to have to have 2x USB cables plugged into the board.

We also realized that we could put a USB2 hub on board and let the USB3 go straight to the MX, allowing both the ESP32 programmer (microUSB in the current 1092 design) and the Myriad X USB2 interface to be combined into a single USB3C connector. This will make the development experience easier (and make building an enclosure easier).

Move to the how:

Using the same idea as the OAK-D-IoT-40, make an equivalent using the OAK-SOM-PRO SOM.
We should also change the CCMs with the new/better design from Arducam, as it allows better hardware-level sync, takes less board space, and is more resilient mechanically (both for production and field robustness).

Move to the what:

  • Like the OAK-D-IoT-40 (small, onboard cameras)
  • But with OAK-SOM-PRO SOM
  • 6 microphones
  • Add onboard speaker, mono is fine (maybe second channel to through-hole solder points?)
  • Built-in SD-Card connected to OAK-SOM-PRO SOM (not ESP32)
  • Do the boot-button approach that we’ve done on other embedded designs, with default boot mode being NOR flash (0x03), and when the button is pressed, USB-boot is active instead.
  • Combine the ESP32 USB2 and the MX USB2 with an onboard USB hub to have only a single USB connection - the USB3C.
  • Use new CCMs from Arducam:
      • RGB IMX378
      • Stereo OV9282


OAK-D Lite Not Working

DepthAI DEMO is not working on OAK-D Lite, and here is the log:

(.venv) PS D:\_depthai\depthai> python .\depthai_demo.py
Using depthai module from:  D:\_depthai\.venv\lib\site-packages\depthai.cp39-win_amd64.pyd
Depthai version installed:  2.11.1.0.dev+dfd52ac01c3b7514e85cb894b9d5381e999859df
Depthai development version found, skipping check.
Setting up demo...
Available devices:
[0] 1844301061514DF500 [X_LINK_UNBOOTED]
USB Connection speed: UsbSpeed.HIGH
Disabling depth...
Disabling depth preview...
Disabling depthRaw preview...
Disabling left preview...
Disabling rectifiedLeft preview...
Disabling right preview...
Disabling rectifiedRight preview...
Enabling low-bandwidth mode due to low USB speed... (speed: UsbSpeed.HIGH)
Creating MJPEG link for ColorCamera node and color xlink stream...
[1844301061514DF500] [8.400] [system] [error] Attempted to start Color camera - NOT detected!
Stopping demo...
=== TOTAL FPS ===
  File "D:\_depthai\depthai\depthai_demo.py", line 536, in run
    self.instance.run_all(self.conf)
  File "D:\_depthai\depthai\depthai_demo.py", line 59, in run_all
    self.run()
  File "D:\_depthai\depthai\depthai_demo.py", line 245, in run
    self.loop()
  File "D:\_depthai\depthai\depthai_demo.py", line 272, in loop
    self._pv.prepareFrames(callback=self.onNewFrame)
  File "d:\_depthai\depthai\depthai_sdk\src\depthai_sdk\managers\preview_manager.py", line 113, in prepareFrames
    packet = queue.tryGet()
Communication exception - possible device error/misconfiguration. Original message 'Couldn't read data from stream: 'nnInput' (X_LINK_ERROR)'
file:///D:/_depthai/depthai/gui/views/CameraPreview.qml:57:13: TypeError: Cannot read property 'height' of null
file:///D:/_depthai/depthai/gui/views/CameraPreview.qml:56:13: TypeError: Cannot read property 'width' of null
(.venv) PS D:\_depthai\depthai>

The error is at the line

[1844301061514DF500] [8.400] [system] [error] Attempted to start Color camera - NOT detected!
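
A quick way to check what the device itself reports, without building the full demo pipeline (a minimal sketch using the same depthai 2.x API the demo already imports):

import depthai as dai

# Open the device with no pipeline and ask which camera sockets it detects.
# On a healthy OAK-D Lite this should list the RGB, LEFT and RIGHT sockets.
with dai.Device() as device:
    print("Connected cameras:", device.getConnectedCameras())
    print("USB speed:", device.getUsbSpeed())

If the RGB socket is missing here as well, the issue is at the hardware/connection level rather than in the demo configuration.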

OAK-D W

Already released, see product documentation here


Start with the why:

We’ve now gotten a bunch of reach-outs about something like OAK-D, but with 150 DFOV, IR-laser-dot-projection depth, and matching resolution/FOV/global shutter for the color camera.

This is similar to this idea, #21, which we already had on the roadmap, but with the change that apparently a lot of applications want both this and wide FOV (wide FOV is tracked separately in #15).

Many folks want to use such a product on small aerial vehicles, and as such, size and weight are important.

Move to the how:

ArduCam already implemented #15, and now produces this 150 DFOV OV9282 module as a standard product for DepthAI customers, here. So far, though, this has only been used in custom products built with OAK-SOM/etc.

We should work with ArduCam to (1) update this OV9282 to the new/standard ArduCam connector format here, and (2) make an OV9782 variant with the same connector/format but one that blocks IR (for good color representation). We will likely prototype first with the existing format/connector.

Move to the what:

Make a Wide FOV version of OAK-D Series 2 (#115)

OAK-D W (W = Wide FOV)

  • 2x OV9282 1280x800 global shutter grayscale 150 DFOV (IR capable)
  • 1x OV9782 1280x800 global shutter color 150 DFOV (IR-blocking)

BW1098FFC with Onboard IMU and ArduCam FFC connections

Start with the why:

The ArduCam OV9281, IMX477, and other modules offering M12, wide-FOV, C-mount, etc. options are extremely useful for making DepthAI applicable to a wide variety of use-cases. So it would be beneficial to have direct support for them on the BW1098FFC without any adapters.

Also, in many applications, the IMU present on the OAK-D, BW1092, etc. would be beneficial, so when modifying this board, it would make sense to add the IMU.

Move to the how:

Pull the IMU from the BW1098OAK and change the FFC adapter connectors to be what ArduCam uses on their FFC cameras.

Move to the what:

  • Add 1A 3.3V regulator from 1092 design to power ArduCam camera modules
  • Change all 3 FFC connectors to https://www.molex.com/molex/products/part-detail/ffc_fpc_connectors/5052782233 to match updated connectors on DM1098FFC, etc.
  • Update the pinout so that all 3 connectors share a single 26-pin definition. The two stereo cameras will remain 2-lane MIPI, while the RGB camera will use the full 4-lane configuration.
  • Level shifter for I2C interface (1.8V <--> 3.3V)
  • Completely new board name (DM1090?) to distinguish from other FFC versions with 20/26 pin FFC interfaces.

Example pinout definitions for the 4-lane (RGB) and 2-lane (stereo left, stereo right) connectors are below:

image

How to open up the 3D model files of these cameras?

Hi,

We are working on a mechanical design that has the OAK-D as a component. However, we could not open the 3D model file or the mechanical file. We have tried several different software packages, even online tools, but still cannot open them. What is the exact software we should use? Thanks.

Support IMX283

Start with the why:

For low-light color and long-range, high-quality digital zoom applications, it is beneficial to have a large image sensor with large pixels. The IMX283 provides both: 20 MP resolution on a 15.86 mm optical-diagonal sensor (quite large), with 2.4 µm square pixels.

Move to the how:

ArduCam is helping us make IMX283 sensor boards that work with DepthAI. We will then write a driver for this module and see how much tuning is required.

Move to the what:

Support the IMX283 on DepthAI.

OAK-FFC-4P: 4-Camera FFC USB3C DepthAI Variant

Start with the why:

For the 1099 SOM, we have the 1090FFC board, which allows folks to quickly prototype with various cameras (e.g. from ArduCam here), stereo baselines, filters (for hyperspectral systems), etc. We are currently lacking such a board for the 2099 SOM.

The 2099 SOM supports several additional things that the 1099 SOM does not, including:

  1. 4 cameras (instead of 3): 2x 4-lane, 2x 2-lane
  2. uSD Support

So we need a board that allows such prototyping, while also enabling the additional functionality of the 2099 SOM above.

On the 1090FFC board, we tried to make it as small as possible so that when folks prototype, their overall prototype is not huge (we have found that most of our customers need to use even their prototypes in space-constrained environments). We have actually found that even though we tried to make it small, it is still borderline "too big" for many customers, or outright too big.

So for the 2090FFC variant, it is key to try to keep it as small, or close to as small, as the 1090FFC.

As this 4-camera board may be used with a variety of cameras, including those with hardware frame-sync, there should be a system onboard for allowing up to all 4 cameras to be synchronized.

Move to the how:

Increase layers as necessary from the 1090FFC, and spend additional routing/placement time/effort to make the 2090FFC as compact as reasonably possible. And we can also change/remove connectors/etc. to do so.

Likely use slots for the mounting holes, like below, to maximize the usable board area while minimizing the actual size of the device:
image

Move to the what:

A compact, 4-camera-capable FFC board with onboard SD-Card and IMU with onboard hardware allowing all 4 cameras to be hardware synced (FSIN).
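
As a sketch of how such FSIN syncing is typically driven from the pipeline side (assumes the depthai 2.x CameraControl.FrameSyncMode API; the socket choices here are illustrative):

import depthai as dai

pipeline = dai.Pipeline()

# One sensor generates the frame-sync (FSIN) pulse...
leader = pipeline.create(dai.node.MonoCamera)
leader.setBoardSocket(dai.CameraBoardSocket.LEFT)
leader.initialControl.setFrameSyncMode(dai.CameraControl.FrameSyncMode.OUTPUT)

# ...and the other sensors latch their exposure start to it.
follower = pipeline.create(dai.node.MonoCamera)
follower.setBoardSocket(dai.CameraBoardSocket.RIGHT)
follower.initialControl.setFrameSyncMode(dai.CameraControl.FrameSyncMode.INPUT)

With the onboard FSIN wiring proposed above, the same pattern extends to all 4 cameras.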

Support 150-degree FishEye OV9282 Camera

Start with the why:

As it stands now, DepthAI is optimized for real-time spatial AI - i.e. where objects are in physical space and attributes about them. For this, a narrower field of view is desirable.

But for applications where it is desirable to self-orient and self-locate (i.e. simultaneous localization and mapping - SLAM; e.g. discussion in ROS discourse here), a wider field of view is of interest. So support for a wide-field-of-view OV9282 camera module is desirable.

And it just so happened that Arducam reached out offering cooperation, and has a module that is ideal in terms of FOV:

  • 150 degrees DFOV (127 degrees HFOV)

OV9281-155-degFOV

Move to the how:

We are working with ArduCam to integrate this OV9282 fisheye version in with DepthAI.

Move to the what:

Support ArduCam 155-degree FishEye OV9282 Camera

Android support?

I know that it might not be on the list now, but can the OAK-D be used with Android devices?
Or are there plans to add Android support for the OAK-D?
