chrisneagu / ftc-skystone-dark-angels-romania-2020

NOTICE

This repository contains the public FTC SDK for the SKYSTONE (2019-2020) competition season. If you are looking for the current season's FTC SDK software, please visit the new and permanent home of the public FTC SDK: the FtcRobotController repository.

Welcome! This GitHub repository contains the source code that is used to build an Android app to control a FIRST Tech Challenge competition robot. To use this SDK, download/clone the entire project to your local computer.

Getting Started

If you are new to robotics or new to the FIRST Tech Challenge, then you should consider reviewing the FTC Blocks Tutorial to get familiar with how to use the control system: FTC Blocks Online Tutorial. Even if you are an advanced Java programmer, it is helpful to start with the FTC Blocks tutorial, and then migrate to the OnBot Java Tool or to Android Studio afterwards.

Downloading the Project

If you are an Android Studio programmer, there are several ways to download this repo. Note that if you use the Blocks or OnBot Java Tool to program your robot, then you do not need to download this repository. If you are a git user, you can clone the most current version of the repository:

git clone https://github.com/FIRST-Tech-Challenge/SKYSTONE.git

Or, if you prefer, you can use the "Download Zip" button available through the main repository page. Downloading the project as a .ZIP file will keep the size of the download manageable. You can also download the project folder (as a .zip or .tar.gz archive file) from the Downloads subsection of the Releases page for this repository. Once you have downloaded and uncompressed (if needed) your folder, you can use Android Studio to import the folder ("Import project (Eclipse ADT, Gradle, etc.)").

Getting Help

User Documentation and Tutorials: FIRST maintains online documentation with information and tutorials on how to use the FIRST Tech Challenge software and robot control system. You can access this documentation using the following link: SKYSTONE Online Documentation. Note that the online documentation is an "evergreen" document that is constantly being updated and edited. It contains the most current information about the FIRST Tech Challenge software and control system.

Javadoc Reference Material: The Javadoc reference documentation for the FTC SDK is now available online. Click on the following link to view the FTC SDK Javadoc documentation as a live website: FTC Javadoc Documentation. Documentation for the FTC SDK is also included with this repository. There is a subfolder called "doc" which contains several subfolders: the folder "apk" contains the .apk files for the FTC Driver Station and FTC Robot Controller apps, and the folder "javadoc" contains the Javadoc user documentation for the FTC SDK.

Online User Forum: For technical questions regarding the Control System or the FTC SDK, please visit the FTC Technology forum: FTC Technology Forum.

Release Information

Version 5.5 (20200824-090813)

Version 5.5 requires Android Studio 4.0 or later.

New features: Adds support for calling custom Java classes from Blocks OpModes (fixes SkyStone issue #161). Classes must be in the org.firstinspires.ftc.teamcode package. Methods must be public static and have no more than 21 parameters. Parameters declared as OpMode, LinearOpMode, Telemetry, and HardwareMap are supported and the argument is provided automatically, regardless of the order of the parameters. On the block, the sockets for those parameters are automatically filled in. Parameters declared as char or java.lang.Character will accept any block that returns text and will only use the first character in the text. Parameters declared as boolean or java.lang.Boolean will accept any block that returns boolean. Parameters declared as byte, java.lang.Byte, short, java.lang.Short, int, java.lang.Integer, long, or java.lang.Long will accept any block that returns a number and will round that value to the nearest whole number. Parameters declared as float, java.lang.Float, double, or java.lang.Double will accept any block that returns a number.
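As a minimal sketch of what such a Blocks-callable class can look like (the class name, method names, and the "left_drive" device name below are hypothetical illustrations, not part of the SDK):

```java
package org.firstinspires.ftc.teamcode;

import com.qualcomm.robotcore.hardware.DcMotor;
import com.qualcomm.robotcore.hardware.HardwareMap;
import org.firstinspires.ftc.robotcore.external.Telemetry;

// Hypothetical helper whose public static methods can be exposed to Blocks OpModes.
public class BlocksHelpers {

    // Telemetry and HardwareMap arguments are supplied automatically by the block;
    // the remaining parameters appear as sockets on the block.
    public static void reportMotorPosition(Telemetry telemetry, HardwareMap hardwareMap,
                                           String motorName) {
        DcMotor motor = hardwareMap.get(DcMotor.class, motorName);
        telemetry.addData(motorName + " position", motor.getCurrentPosition());
    }

    // Numeric parameters accept any block that returns a number.
    public static double averagePower(double left, double right) {
        return (left + right) / 2.0;
    }
}
```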
Adds telemetry API method for setting the display format (Classic, Monospace, or HTML (certain tags only)). Adds blocks support for switching cameras. Adds Blocks support for TensorFlow Object Detection with a custom model. Adds support for uploading a custom TensorFlow Object Detection model in the Manage page, which is especially useful for Blocks and OnBotJava users. Shows new Control Hub blink codes when the WiFi band is switched using the Control Hub's button (only possible on Control Hub OS 1.1.2). Adds new warnings which can be disabled in the Advanced RC Settings: mismatched app versions warning, unnecessary 2.4 GHz WiFi usage warning, and REV Hub is running outdated firmware (older than version 1.8.2). Adds support for Sony PS4 gamepad, and reworks how gamepads work on the Driver Station: Removes preference which sets gamepad type based on driver position. Replaced with menu which allows specifying type for gamepads with unknown VID and PID. Attempts to auto-detect gamepad type based on USB VID and PID. If gamepad VID and PID is not known, use type specified by user for that VID and PID. If gamepad VID and PID is not known AND the user has not specified a type for that VID and PID, an educated guess is made about how to map the gamepad. Driver Station will now attempt to automatically recover from a gamepad disconnecting, and re-assign it to the position it was assigned to when it dropped. If only one gamepad is assigned and it drops: it can be recovered. If two gamepads are assigned, and have different VID/PID signatures, and only one drops: it will be recovered. If two gamepads are assigned, and have different VID/PID signatures, and BOTH drop: both will be recovered. If two gamepads are assigned, and have the same VID/PID signatures, and only one drops: it will be recovered. If two gamepads are assigned, and have the same VID/PID signatures, and BOTH drop: neither will be recovered, because of the ambiguity of the gamepads when they re-appear on the USB bus. There is currently one known edge case: if there are two gamepads with the same VID/PID signature plugged in, but only one is assigned, and they BOTH drop, it's a 50-50 chance of which one will be chosen for automatic recovery to the assigned position: it is determined by whichever one is re-enumerated first by the USB bus controller. Adds landscape user interface to Driver Station. New feature: practice timer with audio cues. New feature (Control Hub only): wireless network connection strength indicator (0-5 bars). New feature (Control Hub only): tapping on the ping/channel display will switch to an alternate display showing radio RX dBm and link speed (tap again to switch back). The layout will NOT autorotate. You can switch the layout from the Driver Station's settings menu.

Breaking changes

Removes support for Android versions 4.4 through 5.1 (KitKat and Lollipop). The minSdkVersion is now 23.
Removes the deprecated LinearOpMode methods waitOneFullHardwareCycle() and waitForNextHardwareCycle().

Enhancements

Handles RS485 address of Control Hub automatically. The Control Hub is automatically given a reserved address. Existing configuration files will continue to work. All addresses in the range of 1-10 are still available for Expansion Hubs. The Control Hub light will now normally be solid green, without blinking to indicate the address. The Control Hub will not be shown on the Expansion Hub Address Change settings page. Improves REV Hub firmware updater. The user can now choose between all available firmware update files. Version 1.8.2 of the REV Hub firmware is bundled into the Robot Controller app. Text was added to clarify that Expansion Hubs can only be updated via USB. Firmware update speed was reduced to improve reliability. Allows REV Hub firmware to be updated directly from the Manage webpage. Improves log viewer on Robot Controller: horizontal scrolling support (no longer word wrapped), pinch-to-zoom, a monospaced font, highlighted error messages, and a new color scheme. Attempts to force-stop a runaway/stuck OpMode without restarting the entire app. Not all types of runaway conditions are stoppable, but if the user code attempts to talk to hardware during the runaway, the system should be able to capture it. Makes various tweaks to the Self Inspect screen: renames "OS version" entry to "Android version", renames "WiFi Direct Name" to "WiFi Name", adds Control Hub OS version when viewing the report of a Control Hub, hides the airplane mode entry when viewing the report of a Control Hub, removes check for ZTE Speed Channel Changer, and shows firmware version for all Expansion and Control Hubs. Reworks network settings portion of Manage page. All network settings are now applied with a single click. The WiFi Direct channel of phone-based Robot Controllers can now be changed from the Manage page. WiFi channels are filtered by band (2.4 vs 5 GHz) and whether they overlap with other channels. The current WiFi channel is pre-selected on phone-based Robot Controllers, and Control Hubs running OS 1.1.2 or later. On Control Hubs running OS 1.1.2 or later, you can choose to have the system automatically select a channel on the 5 GHz band. Improves OnBotJava: new light and dark themes replace the old themes (chaos, github, chrome, ...); the new default theme is light and will be used when you first update to this version. OnBotJava now has a tabbed editor and a read-only offline mode. Improves function of "exit" menu item on Robot Controller and Driver Station; the app is now guaranteed to be fully stopped and unloaded from memory. Shows a warning message if a LinearOpMode exits prematurely due to failure to monitor for the start condition. Improves error message shown when the Driver Station and Robot Controller are incompatible with each other. Driver Station OpMode Control Panel now disabled while a Restart Robot is in progress. Disables advanced settings related to WiFi Direct when the Robot Controller is a Control Hub. Tints phone battery icons on Driver Station when low/critical. Uses names "Control Hub Portal" and "Control Hub" (when appropriate) in new configuration files. Improves I2C read performance: very large improvement on Control Hub, up to ~2x faster with small (e.g.
6 byte) reads Not as apparent on Expansion Hubs connected to a phone Update/refresh build infrastructure Update to 'androidx' support library from 'com.android.support:appcompat', which is end-of-life Update targetSdkVersion and compileSdkVersion to 28 Update Android Studio's Android plugin to latest Fix reported build timestamp in 'About' screen Add sample illustrating manual webcam use: ConceptWebcam Bug fixes Fixes SkyStone issue #248 Fixes SkyStone issue #232 and modifies bulk caching semantics to allow for cache-preserving MANUAL/AUTO transitions. Improves performance when REV 2M distance sensor is unplugged Improves readability of Toast messages on certain devices Allows a Driver Station to connect to a Robot Controller after another has disconnected Improves generation of fake serial numbers for UVC cameras which do not provide a real serial number Previously some devices would assign such cameras a serial of 0:0 and fail to open and start streaming Fixes ftc_app issue #638. Fixes a slew of bugs with the Vuforia camera monitor including: Fixes bug where preview could be displayed with a wonky aspect ratio Fixes bug where preview could be cut off in landscape Fixes bug where preview got totally messed up when rotating phone Fixes bug where crosshair could drift off target when using webcams Fixes issue in UVC driver on some devices (ftc_app 681) if streaming was started/stopped multiple times in a row Issue manifested as kernel panic on devices which do not have this kernel patch. On affected devices which do have the patch, the issue was manifest as simply a failure to start streaming. The Tech Team believes that the root cause of the issue is a bug in the Linux kernel XHCI driver. A workaround was implemented in the SDK UVC driver. Fixes bug in UVC driver where often half the frames from the camera would be dropped (e.g. only 15FPS delivered during a streaming session configured for 30FPS). Fixes issue where TensorFlow Object Detection would show results whose confidence was lower than the minimum confidence parameter. Fixes a potential exploitation issue of CVE-2019-11358 in OnBotJava Fixes changing the address of an Expansion Hub with additional Expansion Hubs connected to it Preserves the Control Hub's network connection when "Restart Robot" is selected Fixes issue where device scans would fail while the Robot was restarting Fix RenderScript usage Use androidx.renderscript variant: increased compatibility Use RenderScript in Java mode, not native: simplifies build Fixes webcam-frame-to-bitmap conversion problem: alpha channel wasn't being initialized, only R, G, & B Fixes possible arithmetic overflow in Deadline Fixes deadlock in Vuforia webcam support which could cause 5-second delays when stopping OpMode Version 5.4 (20200108-101156) Fixes SkyStone issue #88 Adds an inspection item that notes when a robot controller (Control Hub) is using the factory default password. Fixes SkyStone issue #61 Fixes SkyStone issue #142 Fixes ftc_app issue #417 by adding more current and voltage monitoring capabilities for REV Hubs. 
Fixes a crash sometimes caused by OnBotJava activity. Improves OnBotJava autosave functionality (ftc_app #738). Fixes system responsiveness issue when an Expansion Hub is disconnected. Fixes issue where IMU initialization could prevent Op Modes from stopping. Fixes issue where AndroidTextToSpeech.speak() would fail if it was called too early. Adds telemetry.speak() methods and blocks, which cause the Driver Station (if also updated) to speak text. Adds and improves Expansion Hub-related warnings. Improves Expansion Hub low battery warning: displays the warning immediately after the hub reports it, specifies whether the condition is current or occurred temporarily during an OpMode run, and displays which hubs reported low battery. Displays warning when hub loses and regains power during an OpMode run, and fixes the hub's LED pattern after this condition. Displays warning when Expansion Hub is not responding to commands; specifies whether the condition is current or occurred temporarily during an OpMode run. Clarifies warning when Expansion Hub is not present at startup; specifies that this condition requires a Robot Restart before the hub can be used. The hub light will now accurately reflect this state. Improves logging and reduces log spam during these conditions. Syncs the Control Hub time and timezone to a connected web browser programming the robot, if a Driver Station is not available. Adds bulk read functionality for REV Hubs. A bulk caching mode must be set at the Hub level with LynxModule#setBulkCachingMode(). This applies to all relevant SDK hardware classes that reference that Hub. The following Hub bulk caching modes are available: BulkCachingMode.OFF (default): All hardware calls operate as usual. Bulk data can be read through LynxModule#getBulkData() and processed manually. BulkCachingMode.AUTO: Applicable hardware calls are served from a bulk read cache that is cleared/refreshed automatically to ensure identical commands don't hit the same cache. The cache can also be cleared manually with LynxModule#clearBulkCache(), although this is not recommended. (advanced users) BulkCachingMode.MANUAL: Same as BulkCachingMode.AUTO except the cache is never cleared automatically. To avoid getting stale data, the cache must be manually cleared at the beginning of each loop body or as the user deems appropriate. Removes PIDF Annotation values added in Rev 5.3 (to AndyMark, goBILDA and TETRIX motor configurations). The new motor types will still be available but their Default control behavior will revert back to Rev 5.2. Adds new ConceptMotorBulkRead sample OpMode to demonstrate and compare Motor Bulk-Read modes for reducing I/O latencies.
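As a rough sketch of the MANUAL bulk caching mode described above (assuming a LinearOpMode; the class name and the "left_drive" device name are hypothetical):

```java
package org.firstinspires.ftc.teamcode;

import com.qualcomm.hardware.lynx.LynxModule;
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.eventloop.opmode.TeleOp;
import com.qualcomm.robotcore.hardware.DcMotor;

import java.util.List;

@TeleOp(name = "Bulk Read Sketch")
public class BulkReadSketch extends LinearOpMode {
    @Override
    public void runOpMode() {
        // "left_drive" is a hypothetical configuration name.
        DcMotor leftDrive = hardwareMap.get(DcMotor.class, "left_drive");
        List<LynxModule> hubs = hardwareMap.getAll(LynxModule.class);

        // MANUAL mode: the bulk cache is refreshed only when we clear it ourselves.
        for (LynxModule hub : hubs) {
            hub.setBulkCachingMode(LynxModule.BulkCachingMode.MANUAL);
        }

        waitForStart();
        while (opModeIsActive()) {
            // Clear the cache once at the top of each loop so this iteration's reads are fresh.
            for (LynxModule hub : hubs) {
                hub.clearBulkCache();
            }
            // Hardware reads in this iteration are then served from the bulk read data.
            telemetry.addData("left position", leftDrive.getCurrentPosition());
            telemetry.update();
        }
    }
}
```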
Version 5.3 (20191004-112306)

Fixes external USB/UVC webcam support. Makes various bugfixes and improvements to Blocks page, including but not limited to: many visual tweaks; browser zoom and window resize behave better; resizing the Java preview pane works better and more consistently across browsers; the Java preview pane consistently gets scrollbars when needed; the Java preview pane is hidden by default on phones; Internet Explorer 11 should work; large dropdown lists display properly on lower res screens; disabled buttons are now visually identifiable as disabled; a warning is shown if a user selects a TFOD sample, but their device is not compatible; warning messages in a Blocks op mode are now visible by default. Adds goBILDA 5201 and 5202 motors to Robot Configurator. Adds PIDF Annotation values to AndyMark, goBILDA and TETRIX motor configurations. This has the effect of causing the RUN_USING_ENCODERS and RUN_TO_POSITION modes to use PIDF vs PID closed loop control on these motors. This should provide more responsive, yet stable, speed control. PIDF adds Feedforward control to the basic PID control loop. Feedforward is useful when controlling a motor's speed because it "anticipates" how much the control voltage must change to achieve a new speed set-point, rather than requiring the integrated error to change sufficiently. The PIDF values were chosen to provide responsive, yet stable, speed control on a lightly loaded motor. The more heavily a motor is loaded (drag or friction), the more noticeable the PIDF improvement will be. Fixes startup crash on Android 10. Fixes ftc_app issue #712 (thanks to FROGbots-4634). Fixes ftc_app issue #542. Allows "A" and lowercase letters when naming device through RC and DS apps.

Version 5.2 (20190905-083277)

Fixes extra-wide margins on settings activities, and placement of the new configuration button. Adds Skystone Vuforia image target data. Includes sample Skystone Vuforia Navigation op modes (Java). Includes sample Skystone Vuforia Navigation op modes (Blocks). Adds TensorFlow inference model (.tflite) for Skystone game elements. Includes sample Skystone TensorFlow op modes (Java). Includes sample Skystone TensorFlow op modes (Blocks). Removes older (season-specific) sample op modes. Includes 64-bit support (to comply with Google Play requirements). Protects against Stuck OpModes when a Restart Robot is requested. (Thanks to FROGbots-4634) (ftc_app issue #709) Blocks related changes: Fixes bug with blocks generated code when hardware device name is a java or javascript reserved word. Shows generated java code for blocks, even when hardware items are missing from the active configuration. Displays warning icon when outdated Vuforia and TensorFlow blocks are used (SkyStone issue #27).

Version 5.1 (20190820-222104)

Defines default PIDF parameters for the following motors: REV Core Hex Motor, REV 20:1 HD Hex Motor, REV 40:1 HD Hex Motor. Adds back button when running on a device without a system back button (such as a Control Hub). Allows a REV Control Hub to update the firmware on a REV Expansion Hub via USB. Fixes SkyStone issue #9. Fixes ftc_app issue #715. Prevents extra DS User clicks by filtering based on current state. Prevents incorrect DS UI state changes when receiving new OpMode list from RC. Adds support for REV Color Sensor V3. Adds a manual-refresh DS Camera Stream for remotely viewing RC camera frames. To show the stream on the DS, initialize but do not run a stream-enabled opmode, select the Camera Stream option in the DS menu, and tap the image to refresh. This feature is automatically enabled when using Vuforia or TFOD—no additional RC configuration is required for typical use cases. To hide the stream, select the same menu item again. Note that gamepads are disabled and the selected opmode cannot be started while the stream is open as a safety precaution. To use custom streams, consult the API docs for CameraStreamServer#setSource and CameraStreamSource. Adds many Star Wars sounds to RobotController resources. Added SKYSTONE Sounds Chooser Sample Program.
Switches out startup, connect chimes, and error/warning sounds for Star Wars sounds. Updates OnBot Java to use a WebSocket for communication with the robot. The OnBot Java page no longer has to do a full refresh when a user switches from editing one file to another. Known issues: Camera Stream: The Vuforia camera stream inherits the issues present in the phone preview (namely ftc_app issue #574). This problem does not affect the TFOD camera stream even though it receives frames from Vuforia. The orientation of the stream frames may not always match the phone preview. For now, these frames may be rotated manually via a custom CameraStreamSource if desired. OnBotJava: Browser back button may not always work correctly. It's possible for a build to be queued, but not started. The OnBot Java build console will display a warning if this occurs. A user might not realize they are editing a different file if the user inadvertently switches from one file to another since this switch is now seamless. The name of the currently open file is displayed in the browser tab.

Version 5.0 (built on 19.06.14)

Support for the REV Robotics Control Hub. Adds a Java preview pane to the Blocks editor. Adds a new offline export feature to the Blocks editor. Display wifi channel in Network circle on Driver Station. Adds calibration for Logitech C270. Updates build tooling and target SDK. Compliance with Google's permissions infrastructure (required after build tooling update). Keep Alives to mitigate the Motorola wifi scanning problem. Telemetry substitute no longer necessary. Improves Vuforia error reporting. Fixes ftctechnh/ftc_app issues 621, 713. Miscellaneous bug fixes and improvements.

Version 4.3 (built on 18.10.31)

Includes missing TensorFlow-related libraries and files.

Version 4.2 (built on 18.10.30)

Includes fix to avoid deadlock situation with WatchdogMonitor which could result in USB communication errors. Comm error appeared to require that user disconnect USB cable and restart the Robot Controller app to recover. robotControllerLog.txt would have error messages that included the words "E RobotCore: lynx xmit lock: #### abandoning lock:" Includes fix to correctly list the parent module address for a REV Robotics Expansion Hub in a configuration (.xml) file. Bug in versions 4.0 and 4.1 would incorrectly list the module address for a parent REV Robotics device as "1". If the parent module had a higher address value than the daisy-chained module, then this bug would prevent the Robot Controller from communicating with the downstream Expansion Hub. Added requirement for ACCESS_COARSE_LOCATION to allow a Driver Station running Android Oreo to scan for Wi-Fi Direct devices. Added google() repo to build.gradle because aapt2 must be downloaded from the google() repository beginning with version 3.2 of the Android Gradle Plugin. Important Note: Android Studio users will need to be connected to the Internet the first time they build the ftc_app project. Internet connectivity is required for the first build so the appropriate files can be downloaded from the Google repository. Users should not need to be connected to the Internet for subsequent builds. This should also fix the build issue where Android Studio would complain that it "Could not find com.android.tools.lint:lint-gradle:26.1.4" (or similar). Added support for REV Spark Mini motor controller as part of the configuration menu for a servo/PWM port on the REV Expansion Hub. Provide examples for playing audio files in an Op Mode.
Block Development Tool Changes: Includes a fix for a problem with the Velocity blocks that were reported in the FTC Technology forum (Blocks Programming subforum). Change the "Save completed successfully." message to a white color so it will contrast with a green background. Fixed the "Download image" feature so it will work if there are text blocks in the op mode. Introduce support for Google's TensorFlow Lite technology for object detection for the 2018-2019 game. TensorFlow Lite can recognize Gold Mineral and Silver Mineral from the 2018-2019 game. Example Java and Block op modes are included to show how to determine the relative position of the gold block (left, center, right).

Version 4.1 (released on 18.09.24)

Changes include: Fix to prevent crash when deprecated configuration annotations are used. Change to allow FTC Robot Controller APK to be auto-updated using FIRST Global Control Hub update scripts. Removed samples for non-supported / non-legal hardware. Improvements to Telemetry.addData block with "text" socket. Updated Blocks sample op mode list to include Rover Ruckus Vuforia example. Update SDK library version number.

Version 4.0 (released on 18.09.12)

Changes include: Initial support for UVC compatible cameras. If UVC camera has a unique serial number, RC will detect and enumerate by serial number. If UVC camera lacks a unique serial number, RC will only support one camera of that type connected. Calibration settings for a few cameras are included (see TeamCode/src/main/res/xml/teamwebcamcalibrations.xml for details). User can upload calibration files from Program and Manage web interface. UVC cameras seem to draw a fair amount of electrical current from the USB bus. This does not appear to present any problems for the REV Robotics Control Hub. This does seem to create stability problems when using some cameras with an Android phone-based Robot Controller. FTC Tech Team is investigating options to mitigate this issue with the phone-based Robot Controllers. Updated sample Vuforia Navigation and VuMark Op Modes to demonstrate how to use an internal phone-based camera and an external UVC webcam. Support for improved motor control. REV Robotics Expansion Hub firmware 1.8 and greater will support a feed forward mechanism for closed loop motor control. FTC SDK has been modified to support PIDF coefficients (proportional, integral, derivative, and feed forward). FTC Blocks development tool modified to include PIDF programming blocks. Deprecated older PID-related methods and variables. REV's 1.8.x PIDF-related changes provide a more linear and accurate way to control a motor. Wireless: Added 5GHz support for wireless channel changing for those devices that support it. Tested with Moto G5 and E4 phones. Also tested with other (currently non-approved) phones such as Samsung Galaxy S8. Improved Expansion Hub firmware update support in Robot Controller app. Changes to make the system more robust during the firmware update process (when performed through Robot Controller app). User no longer has to disconnect a downstream daisy-chained Expansion Hub when updating an Expansion Hub's firmware. If user is updating an Expansion Hub's firmware through a USB connection, he/she does not have to disconnect RS485 connection to other Expansion Hubs. The user still must use a USB connection to update an Expansion Hub's firmware. The user cannot update the Expansion Hub firmware for a downstream device that is daisy chained through an RS485 connection.
If an Expansion Hub accidentally gets "bricked" the Robot Controller app is now more likely to recognize the Hub when it scans the USB bus. Robot Controller app should be able to detect an Expansion Hub, even if it accidentally was bricked in a previous update attempt. Robot Controller app should be able to install the firmware onto the Hub, even if it accidentally was bricked in a previous update attempt. Resiliency: FTC software can detect and enable an FTDI reset feature that is available with REV Robotics v1.8 Expansion Hub firmware and greater. When enabled, the Expansion Hub can detect if it hasn't communicated with the Robot Controller over the FTDI (USB) connection. If the Hub hasn't heard from the Robot Controller in a while, it will reset the FTDI connection. This action helps the system recover from some ESD-induced disruptions. Various fixes to improve reliability of FTC software. Blocks: Fixed errors with string and list indices in blocks export to java. Support for USB connected UVC webcams. Refactored optimized Blocks Vuforia code to support Rover Ruckus image targets. Added programming blocks to support PIDF (proportional, integral, derivative and feed forward) motor control. Added formatting options (under Telemetry and Miscellaneous categories) so user can set how many decimal places to display a numerical value. Support to play audio files (which are uploaded through Blocks web interface) on Driver Station in addition to the Robot Controller. Fixed bug with Download Image of Blocks feature. Support for REV Robotics Blinkin LED Controller. Support for REV Robotics 2m Distance Sensor. Added support for a REV Touch Sensor (no longer have to configure as a generic digital device). Added blocks for DcMotorEx methods. These are enhanced methods that you can use when supported by the motor controller hardware. The REV Robotics Expansion Hub supports these enhanced methods. Enhanced methods include methods to get/set motor velocity (in encoder pulses per second), get/set PIDF coefficients, etc. (see the Java sketch below). Modest Improvements in Logging: Decrease frequency of battery checker voltage statements. Removed non-FTC related log statements (wherever possible). Introduced a "Match Logging" feature. Under "Settings" a user can enable/disable this feature (it's disabled by default). If enabled, user provides a "Match Number" through the Driver Station user interface (top of the screen). The Match Number is used to create a log file specifically with log statements from that particular Op Mode run. Match log files are stored in /sdcard/FIRST/matlogs on the Robot Controller. Once an op mode run is complete, the Match Number is cleared. This is a convenient way to create a separate match log with statements only related to a specific op mode run. New Devices: Support for REV Robotics Blinkin LED Controller. Support for REV Robotics 2m Distance Sensor. Added configuration option for REV 20:1 HD Hex Motor. Added support for a REV Touch Sensor (no longer have to configure as a generic digital device). Miscellaneous: Fixed some errors in the definitions for acceleration and velocity in our javadoc documentation. Added ability to play audio files on Driver Station. When user is configuring an Expansion Hub, the LED on the Expansion Hub will change blink pattern (purple-cyan) to indicate which Hub is currently being configured. Renamed I2cSensorType to I2cDeviceType.
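As a sketch of the DcMotorEx enhanced methods mentioned above (velocity commanded in encoder pulses per second, plus reading PIDF coefficients; the class name and the "left_drive" device name are hypothetical):

```java
package org.firstinspires.ftc.teamcode;

import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.eventloop.opmode.TeleOp;
import com.qualcomm.robotcore.hardware.DcMotor;
import com.qualcomm.robotcore.hardware.DcMotorEx;
import com.qualcomm.robotcore.hardware.PIDFCoefficients;

@TeleOp(name = "DcMotorEx Sketch")
public class DcMotorExSketch extends LinearOpMode {
    @Override
    public void runOpMode() {
        // "left_drive" is a hypothetical configuration name.
        DcMotorEx motor = hardwareMap.get(DcMotorEx.class, "left_drive");
        motor.setMode(DcMotor.RunMode.RUN_USING_ENCODER);

        waitForStart();

        // Enhanced methods: command a velocity in encoder pulses per second
        // and inspect the PIDF coefficients used for velocity control.
        motor.setVelocity(600);
        PIDFCoefficients pidf = motor.getPIDFCoefficients(DcMotor.RunMode.RUN_USING_ENCODER);

        while (opModeIsActive()) {
            telemetry.addData("velocity", motor.getVelocity());
            telemetry.addData("pidf", pidf);
            telemetry.update();
        }
    }
}
```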
Added an external sample Op Mode that demonstrates localization using 2018-2019 (Rover Ruckus presented by QualComm) Vuforia targets. Added an external sample Op Mode that demonstrates how to use the REV Robotics 2m Laser Distance Sensor. Added an external sample Op Mode that demonstrates how to use the REV Robotics Blinkin LED Controller. Re-categorized external Java sample Op Modes to "TeleOp" instead of "Autonomous". Known issues: Initial support for UVC compatible cameras: UVC cameras seem to draw a significant amount of current from the USB bus. This does not appear to present any problems for the REV Robotics Control Hub. This does seem to create stability problems when using some cameras with an Android phone-based Robot Controller. FTC Tech Team is investigating options to mitigate this issue with the phone-based Robot Controllers. There might be a possible deadlock which causes the RC to become unresponsive when using a UVC webcam with a Nougat Android Robot Controller. Wireless: When user selects a wireless channel, this channel does not necessarily persist if the phone is power cycled. Tech Team is hoping to eventually address this issue in a future release. Issue has been present since apps were introduced (i.e., it is not new with the v4.0 release). Wireless channel is not currently displayed for WiFi Direct connections. Miscellaneous: The blink indication feature that shows which Expansion Hub is currently being configured does not work for a newly created configuration file. User has to first save a newly created configuration file and then close and re-edit the file in order for blink indicator to work.

Version 3.6 (built on 17.12.18)

Changes include: Blocks Changes: Uses updated Google Blockly software to allow users to edit their op modes on Apple iOS devices (including iPad and iPhone). Improvement in Blocks tool to handle corrupt op mode files. Autonomous op modes should no longer get switched back to tele-op after re-opening them to be edited. The system can now detect type mismatches during runtime and alert the user with a message on the Driver Station. Updated javadoc documentation for setPower() method to reflect correct range of values (-1 to +1). Modified VuforiaLocalizerImpl to allow for user rendering of frames. Added a user-overrideable onRenderFrame() method which gets called by the class's renderFrame() method.

Version 3.5 (built on 17.10.30)

Changes with version 3.5 include: Introduced a fix to prevent random op mode stops, which can occur after the Robot Controller app has been paused and then resumed (for example, when a user temporarily turns off the display of the Robot Controller phone, and then turns the screen back on). Introduced a fix to prevent random op mode stops, which were previously caused by random peer disconnect events on the Driver Station. Fixes issue where log files would be closed on pause of the RC or DS, but not re-opened upon resume. Fixes issue with battery handler (voltage) start/stop race. Fixes issue where Android Studio generated op modes would disappear from available list in certain situations. Fixes problem where OnBot Java would not build on REV Robotics Control Hub. Fixes problem where OnBot Java would not build if the date and time on the Robot Controller device was "rewound" (set to an earlier date/time). Improved error message on OnBot Java that occurs when renaming a file fails. Removed unneeded resources from android.jar binaries used by OnBot Java to reduce final size of Robot Controller app. Added MR_ANALOG_TOUCH_SENSOR block to Blocks Programming Tool.
Version 3.4 (built on 17.09.06)

Changes with version 3.4 include: Added telemetry.update() statement for BlankLinearOpMode template. Renamed sample Block op modes to be more consistent with Java samples. Added some additional sample Block op modes. Reworded OnBot Java readme slightly.

Version 3.3 (built on 17.09.04)

This version of the software includes improvements for the FTC Blocks Programming Tool and the OnBot Java Programming Tool. Changes with version 3.3 include: Android Studio ftc_app project has been updated to use Gradle Plugin 2.3.3. Android Studio ftc_app project is already using gradle 3.5 distribution. Robot Controller log has been renamed to /sdcard/RobotControllerLog.txt (note that this change was actually introduced w/ v3.2). Improvements in I2C reliability. Optimized I2C read for REV Expansion Hub, with v1.7 firmware or greater. Updated all external/samples (available through OnBot and in Android project folder). Vuforia: Added support for VuMarks that will be used for the 2017-2018 season game. Blocks: Update to latest Google Blockly release. Sample op modes can be selected as a template when creating new op mode. Fixed bug where the blocks would disappear temporarily when mouse button is held down. Added blocks for Range.clip and Range.scale. User can now disable/enable Block op modes. Fix to prevent occasional Blocks deadlock. OnBot Java: Significant improvements with autocomplete function for OnBot Java editor. Sample op modes can be selected as a template when creating new op mode. Fixes and changes to complete hardware setup feature. Updated (and more useful) onBot welcome message. Known issues: Android Studio: After updating to the new v3.3 Android Studio project folder, if you get error messages indicating "InvalidVirtualFileAccessException" then you might need to do a File->Invalidate Caches / Restart to clear the error. OnBot Java: Sometimes when you push the build button to build all op modes, the RC returns an error message that the build failed. If you press the build button a second time, the build typically succeeds.

Version 3.2 (built on 17.08.02)

This version of the software introduces the "OnBot Java" Development Tool. Similar to the FTC Blocks Development Tool, the FTC OnBot Java Development Tool allows a user to create, edit and build op modes dynamically using only a Javascript-enabled web browser. The OnBot Java Development Tool is an integrated development environment (IDE) that is served up by the Robot Controller. Op modes are created and edited using a Javascript-enabled browser (Google Chrome is recommended). Op modes are saved on the Robot Controller Android device directly. The OnBot Java Development Tool provides a Java programming environment that does NOT need Android Studio. Changes with version 3.2 include: Enhanced web-based development tools: Introduction of OnBot Java Development Tool. Web-based programming and management features are "always on" (user no longer needs to put Robot Controller into programming mode). Web-based management interface (where user can change Robot Controller name and also easily download Robot Controller log file). OnBot Java, Blocks and Management features available from web based interface. Blocks Programming Development Tool: Changed "LynxI2cColorRangeSensor" block to "REV Color/range sensor" block. Fixed tooltip for ColorSensor.isLightOn block. Added blocks for ColorSensor.getNormalizedColors and LynxI2cColorRangeSensor.getNormalizedColors (see the Java sketch below). Added example op modes for digital touch sensor and REV Robotics Color Distance sensor. User selectable color themes. Includes many minor enhancements and fixes (too numerous to list).
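As a rough Java counterpart to the getNormalizedColors blocks mentioned above (the class name and the "sensor_color" device name are hypothetical):

```java
package org.firstinspires.ftc.teamcode;

import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.eventloop.opmode.TeleOp;
import com.qualcomm.robotcore.hardware.NormalizedColorSensor;
import com.qualcomm.robotcore.hardware.NormalizedRGBA;

@TeleOp(name = "Normalized Color Sketch")
public class NormalizedColorSketch extends LinearOpMode {
    @Override
    public void runOpMode() {
        // "sensor_color" is a hypothetical configuration name.
        NormalizedColorSensor colorSensor =
                hardwareMap.get(NormalizedColorSensor.class, "sensor_color");

        waitForStart();
        while (opModeIsActive()) {
            // Read normalized RGBA values and report them to the Driver Station.
            NormalizedRGBA colors = colorSensor.getNormalizedColors();
            telemetry.addData("red", colors.red);
            telemetry.addData("green", colors.green);
            telemetry.addData("blue", colors.blue);
            telemetry.update();
        }
    }
}
```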
Known issues: Auto complete function is incomplete and does not support the following (for now): access via this keyword, access via super keyword, members of the superclass not overridden by the class, any methods provided in the current class, inner classes, casted objects, and any objects coming from a parenthetically enclosed expression.

Version 3.10 (built on 17.05.09)

This version of the software provides support for the REV Robotics Expansion Hub. This version also includes improvements in the USB communication layer in an effort to enhance system resiliency. If you were using a 2.x version of the software previously, updating to version 3.1 requires that you also update your Driver Station software in addition to updating the Robot Controller software. Also note that in version 3.10 software, the setMaxSpeed and getMaxSpeed methods are no longer available (not deprecated, they have been removed from the SDK). Also note that the new 3.x software incorporates motor profiles that a user can select as he/she configures the robot. Changes include: Blocks changes: Added VuforiaTrackableDefaultListener.getPose and Vuforia.trackPose blocks. Added optimized blocks support for Vuforia extended tracking. Added atan2 block to the math category. Added useCompetitionFieldTargetLocations parameter to Vuforia.initialize block. If set to false, the target locations are placed at (0,0,0) with target orientation as specified in the https://github.com/gearsincorg/FTCVuforiaDemo/blob/master/Robot_Navigation.java tutorial op mode. Incorporates additional improvements to USB comm layer to improve system resiliency (to recover from a greater number of communication disruptions).

Additional Notes Regarding Version 3.00 (built on 17.04.13)

In addition to the release changes listed below (see section labeled "Version 3.00 (built on 17.04.13)"), version 3.00 has the following important changes: Version 3.00 software uses a new version of the FTC Robocol (robot protocol). If you upgrade to v3.0 on the Robot Controller and/or Android Studio side, you must also upgrade the Driver Station software to match the new Robocol. Version 3.00 software removes the setMaxSpeed and getMaxSpeed methods from the DcMotor class. If you have an op mode that formerly used these methods, you will need to remove the references/calls to these methods. Instead, v3.0 provides the max speed information through the use of motor profiles that are selected by the user during robot configuration. Version 3.00 software currently does not have a mechanism to disable extra i2c sensors. We hope to re-introduce this function with a release in the near future.

Version 3.00 (built on 17.04.13)

*** Use this version of the software at YOUR OWN RISK!!! *** This software is being released as an "alpha" version. Use this version at your own risk! This pre-release software contains SIGNIFICANT changes, including changes to the Wi-Fi Direct pairing mechanism, rewrites of the I2C sensor classes, changes to the USB/FTDI layer, and the introduction of support for the REV Robotics Expansion Hub and the REV Robotics color-range-light sensor. These changes were implemented to improve the reliability and resiliency of the FTC control system. Please note, however, that version 3.00 is considered "alpha" code.
This code is being released so that the FIRST community will have an opportunity to test the new REV Expansion Hub electronics module when it becomes available in May. The developers do not recommend using this code for critical applications (i.e., competition use). *** Use this version of the software at YOUR OWN RISK!!! *** Changes include: Major rework of sensor-related infrastructure. Includes rewriting sensor classes to implement synchronous I2C communication. Fix to reset Autonomous timer back to 30 seconds. Implementation of specific motor profiles for approved 12V motors (includes Tetrix, AndyMark, Matrix and REV models). Modest improvements to enhance Wi-Fi P2P pairing. Fixes telemetry log addition race. Publishes all the sources (not just a select few). Includes Block programming improvements: Addition of optimized Vuforia blocks. Auto scrollbar to projects and sounds pages. Fixed blocks paste bug. Blocks execute after while-opModeIsActive loop (to allow for cleanup before exiting op mode). Added gyro integratedZValue block. Fixes bug with projects page for Firefox browser. Added IsSpeaking block to AndroidTextToSpeech. Implements support for the REV Robotics Expansion Hub. Implements support for integral REV IMU (physically installed on I2C bus 0, uses same Bosch BNO055 9 axis absolute orientation sensor as Adafruit 9DOF abs orientation sensor). Implements support for REV color/range/light sensor. Provides support to update Expansion Hub firmware through FTC SDK. Detects REV firmware version and records in log file. Includes support for REV Control Hub (note that the REV Control Hub is not yet approved for FTC use). Implements FTC Blocks programming support for REV Expansion Hub and sensor hardware. Detects and alerts when an I2C device disconnects.

Version 2.62 (built on 17.01.07)

Added null pointer check before calling modeToByte() in finishModeSwitchIfNecessary method for ModernRoboticsUsbDcMotorController class. Changes to enhance Modern Robotics USB protocol robustness.

Version 2.61 (released on 16.12.19)

Blocks Programming mode changes: Fix to correct issue when an exception was thrown because an OpticalDistanceSensor object appears twice in the hardware map (the second time as a LightSensor).

Version 2.6 (released on 16.12.16)

Fixes for Gyro class: Improve (decrease) sensor refresh latency. Fix isCalibrating issues. Blocks Programming mode changes: Blocks now ignores a device in the configuration xml if the name is empty. Other devices in the configuration work fine.

Version 2.5 (internal release on 16.12.13)

Blocks Programming mode changes: Added blocks support for AdafruitBNO055IMU. Added Download Op Mode button to FtcBlocks.html. Added support for copying blocks in one OpMode and pasting them in another OpMode. The clipboard content is stored on the phone, so the programming mode server must be running. Modified Utilities section of the toolbox. In Programming Mode, display information about the active connections. Fixed paste location when workspace has been scrolled. Added blocks support for the android Accelerometer. Fixed issue where Blocks Upload Op Mode truncated name at first dot. Added blocks support for Android SoundPool. Added type safety to blocks for Acceleration. Added type safety to blocks for AdafruitBNO055IMU.Parameters. Added type safety to blocks for AnalogInput. Added type safety to blocks for AngularVelocity. Added type safety to blocks for Color. Added type safety to blocks for ColorSensor. Added type safety to blocks for CompassSensor.
Added type safety to blocks for CRServo. Added type safety to blocks for DigitalChannel. Added type safety to blocks for ElapsedTime. Added type safety to blocks for Gamepad. Added type safety to blocks for GyroSensor. Added type safety to blocks for IrSeekerSensor. Added type safety to blocks for LED. Added type safety to blocks for LightSensor. Added type safety to blocks for LinearOpMode. Added type safety to blocks for MagneticFlux. Added type safety to blocks for MatrixF. Added type safety to blocks for MrI2cCompassSensor. Added type safety to blocks for MrI2cRangeSensor. Added type safety to blocks for OpticalDistanceSensor. Added type safety to blocks for Orientation. Added type safety to blocks for Position. Added type safety to blocks for Quaternion. Added type safety to blocks for Servo. Added type safety to blocks for ServoController. Added type safety to blocks for Telemetry. Added type safety to blocks for Temperature. Added type safety to blocks for TouchSensor. Added type safety to blocks for UltrasonicSensor. Added type safety to blocks for VectorF. Added type safety to blocks for Velocity. Added type safety to blocks for VoltageSensor. Added type safety to blocks for VuforiaLocalizer.Parameters. Added type safety to blocks for VuforiaTrackable. Added type safety to blocks for VuforiaTrackables. Added type safety to blocks for enums in AdafruitBNO055IMU.Parameters. Added type safety to blocks for AndroidAccelerometer, AndroidGyroscope, AndroidOrientation, and AndroidTextToSpeech.

Version 2.4 (released on 16.11.13)

Fix to avoid crashing for nonexistent resources. Blocks Programming mode changes: Added blocks to support OpenGLMatrix, MatrixF, and VectorF. Added blocks to support AngleUnit, AxesOrder, AxesReference, CameraDirection, CameraMonitorFeedback, DistanceUnit, and TempUnit. Added blocks to support Acceleration. Added blocks to support LinearOpMode.getRuntime. Added blocks to support MagneticFlux and Position. Fixed typos. Made blocks for ElapsedTime more consistent with other objects. Added blocks to support Quaternion, Velocity, Orientation, AngularVelocity. Added blocks to support VuforiaTrackables, VuforiaTrackable, VuforiaLocalizer, VuforiaTrackableDefaultListener. Fixed a few blocks. Added type checking to new blocks. Updated to latest blockly. Added default variable blocks to navigation and matrix blocks. Fixed toolbox entry for openGLMatrix_rotation_withAxesArgs. When user downloads Blocks-generated op mode, only the .blk file is downloaded. When user uploads Blocks-generated op mode (.blk file), Javascript code is auto generated. Added DbgLog support. Added logging when a blocks file is read/written. Fixed bug to properly render blocks even if missing devices from configuration file. Added support for additional characters (not just alphanumeric) for the block file names (for download and upload). Added support for OpMode flavor (“Autonomous” or “TeleOp”) and group. Changes to Samples to prevent tutorial issues. Incorporated suggested changes from public pull 216 (“Replace .. paths”). Remove Servo Glitches when robot stopped. If user hits “Cancel” when editing a configuration file, clears the unsaved changes and reverts to original unmodified configuration. Added log info to help diagnose why the Robot Controller app was terminated (for example, by watch dog function). Added ability to transfer log from the controller. Fixed inconsistency for AngularVelocity. Limit unbounded growth of data for telemetry.
If user does not call telemetry.update() for LinearOpMode in a timely manner, data added for telemetry might get lost if size limit is exceeded.

Version 2.35 (released on 16.10.06)

Blockly programming mode - Removed unnecessary idle() call from blocks for new project.

Version 2.30 (released on 16.10.05)

Blockly programming mode: Mechanism added to save Blockly op modes from Programming Mode Server onto local device. To avoid clutter, blocks are displayed in categorized folders. Added support for DigitalChannel. Added support for ModernRoboticsI2cCompassSensor. Added support for ModernRoboticsI2cRangeSensor. Added support for VoltageSensor. Added support for AnalogInput. Added support for AnalogOutput. Fix for CompassSensor setMode block. Vuforia: Fix deadlock / make camera data available while Vuforia is running. Update to Vuforia 6.0.117 (recommended by Vuforia and Google to close security loophole). Fix for autonomous 30 second timer bug (where timer was in effect, even though it appeared to have timed out). opModeIsActive changes to allow cleanup after op mode is stopped (with enforced 2 second safety timeout). Fix to avoid reading i2c twice. Updated sample Op Modes. Improved logging and fixed intermittent freezing. Added digital I/O sample. Cleaned up device names in sample op modes to be consistent with Pushbot guide. Fix to allow use of IrSeekerSensorV3.

Version 2.20 (released on 16.09.08)

Support for Modern Robotics Compass Sensor. Support for Modern Robotics Range Sensor. Revise device names for Pushbot templates to match the names used in Pushbot guide. Fixed bug so that IrSeekerSensorV3 device is accessible as IrSeekerSensor in hardwareMap. Modified computer vision code to require an individual Vuforia license (per legal requirement from PTC). Minor fixes. Blockly enhancements: Support for Voltage Sensor. Support for Analog Input. Support for Analog Output. Support for Light Sensor. Support for Servo Controller.

Version 2.10 (released on 16.09.03)

Support for Adafruit IMU. Improvements to ModernRoboticsI2cGyro class: Block on reset of z axis. isCalibrating() returns true while gyro is calibrating. Updated sample gyro program. Blockly enhancements: Support for android.graphics.Color. Added support for ElapsedTime. Improved look and legibility of blocks. Support for compass sensor. Support for ultrasonic sensor. Support for IrSeeker. Support for LED. Support for color sensor. Support for CRServo. Prompt user to configure robot before using programming mode. Provides ability to disable audio cues. Various bug fixes and improvements.

Version 2.00 (released on 16.08.19)

This is the new release for the upcoming 2016-2017 FIRST Tech Challenge Season. Channel change is enabled in the FTC Robot Controller app for Moto G 2nd and 3rd Gen phones. Users can now use annotations to register/disable their Op Modes. Changes in the Android SDK, JDK and build tool requirements (minsdk=19, java 1.7, build tools 23.0.3). Standardized units in analog input. Cleaned up code for existing analog sensor classes. setChannelMode and getChannelMode were REMOVED from the DcMotorController class. This is important - we no longer set the motor modes through the motor controller. setMode and getMode were added to the DcMotor class. ContinuousRotationServo class has been added to the FTC SDK. Range.clip() method has been overloaded so it can support this operation for int, short and byte integers. Some changes have been made (new methods added) on how a user can access items from the hardware map.
Users can now set the zero power behavior for a DC motor so that the motor will brake or float when power is zero. Prototype Blockly Programming Mode has been added to FTC Robot Controller. Users can place the Robot Controller into this mode, and then use a device (such as a laptop) that has a Javascript enabled browser to write Blockly-based Op Modes directly onto the Robot Controller. Users can now configure the robot remotely through the FTC Driver Station app. Android Studio project supports Android Studio 2.1.x and compile SDK Version 23 (Marshmallow). Vuforia Computer Vision SDK integrated into FTC SDK. Users can use sample vision targets to get localization information on a standard FTC field. Project structure has been reorganized so that there is now a TeamCode package that users can use to place their local/custom Op Modes into this package. Inspection function has been integrated into the FTC Robot Controller and Driver Station Apps (Thanks Team HazMat… 9277 & 10650!). Audio cues have been incorporated into FTC SDK. Swap mechanism added to FTC Robot Controller configuration activity. For example, if you have two motor controllers on a robot, and you misidentified them in your configuration file, you can use the Swap button to swap the devices within the configuration file (so you do not have to manually re-enter in the configuration info for the two devices). Fix mechanism added to allow user to replace an electronic module easily. For example, suppose a servo controller dies on your robot. You replace the broken module with a new module, which has a different serial number from the original servo controller. You can use the Fix button to automatically reconfigure your configuration file to use the serial number of the new module. Improvements made to fix resiliency and responsiveness of the system. For LinearOpMode the user now must call telemetry.update() to update the telemetry data on the driver station. This update() mechanism ensures that the driver station gets the updated data properly and at the same time. The Auto Configure function of the Robot Controller is now template based. If there is a commonly used robot configuration, a template can be created so that the Auto Configure mechanism can be used to quickly configure a robot of this type. The logic to detect a runaway op mode (both in the LinearOpMode and OpMode types) and to abort the run, then auto recover has been improved/implemented. Fix has been incorporated so that Logitech F310 gamepad mappings will be correct for Marshmallow users.
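A short sketch pulling together a few of the Version 2.00 changes noted above: setMode() now lives on DcMotor, zero power behavior can be set to brake or float, and a LinearOpMode must call telemetry.update() for data to reach the Driver Station (the class name and the "left_drive" device name are hypothetical):

```java
package org.firstinspires.ftc.teamcode;

import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.eventloop.opmode.TeleOp;
import com.qualcomm.robotcore.hardware.DcMotor;

@TeleOp(name = "Version 2.00 Features Sketch")
public class V2FeaturesSketch extends LinearOpMode {
    @Override
    public void runOpMode() {
        // "left_drive" is a hypothetical configuration name.
        DcMotor motor = hardwareMap.dcMotor.get("left_drive");

        // Motor modes are now set on the DcMotor itself, not on the controller.
        motor.setMode(DcMotor.RunMode.RUN_USING_ENCODER);
        // Brake (rather than float) when power is zero.
        motor.setZeroPowerBehavior(DcMotor.ZeroPowerBehavior.BRAKE);

        waitForStart();
        while (opModeIsActive()) {
            // Stick input used directly for brevity.
            motor.setPower(gamepad1.left_stick_y);
            telemetry.addData("encoder", motor.getCurrentPosition());
            // telemetry.update() is required for the data to reach the Driver Station.
            telemetry.update();
        }
    }
}
```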
Release 16.07.08

For the ftc_app project, the gradle files have been modified to support Android Studio 2.1.x.

Release 16.03.30

For the MIT App Inventor, the design blocks have new icons that better represent the function of each design component. Some changes were made to the shutdown logic to ensure the robust shutdown of some of our USB services. A change was made to LinearOpMode so as to allow a given instance to be executed more than once, which is required for the App Inventor. Javadoc improved/updated.

Release 16.03.09

Changes made to make the FTC SDK synchronous (significant change!): waitOneFullHardwareCycle() and waitForNextHardwareCycle() are no longer needed and have been deprecated. runOpMode() (for a LinearOpMode) is now decoupled from the system's hardware read/write thread. loop() (for an OpMode) is now decoupled from the system's hardware read/write thread. Methods are synchronous. For example, if you call setMode(DcMotorController.RunMode.RESET_ENCODERS) for a motor, the encoder is guaranteed to be reset when the method call is complete. For legacy module (NXT compatible), user no longer has to toggle between read and write modes when reading from or writing to a legacy device. Changes made to enhance reliability/robustness during ESD event. Changes made to make code thread safe. Debug keystore added so that user-generated robot controller APKs will all use the same signed key (to avoid conflicts if a team has multiple developer laptops for example). Firmware version information for Modern Robotics modules is now logged. Changes made to improve USB comm reliability and robustness. Added support for voltage indicator for legacy (NXT-compatible) motor controllers. Changes made to provide auto stop capabilities for op modes. A LinearOpMode class will stop when the statements in runOpMode() are complete. User does not have to push the stop button on the driver station. If an op mode is stopped by the driver station, but there is a run away/uninterruptible thread persisting, the app will log an error message then force itself to crash to stop the runaway thread. Driver Station UI modified to display lowest measured voltage below current voltage (12V battery). Driver Station UI modified to have color background for current voltage (green=good, yellow=caution, red=danger, extremely low voltage). Javadoc improved (edits and additional classes). Added app build time to About activity for driver station and robot controller apps. Display local IP addresses on Driver Station About activity. Added I2cDeviceSynchImpl. Added I2cDeviceSync interface. Added seconds() and milliseconds() to ElapsedTime for clarity. Added getCallbackCount() to I2cDevice. Added missing clearI2cPortActionFlag. Added code to create log messages while waiting for LinearOpMode shutdown. Fix so Wifi Direct Config activity will no longer launch multiple times. Added the ability to specify an alternate i2c address in software for the Modern Robotics gyro.

Release 16.02.09

Improved battery checker feature so that voltage values get refreshed regularly (every 250 msec) on Driver Station (DS) user interface. Improved software so that Robot Controller (RC) is much more resilient and “self-healing” to USB disconnects: If user attempts to start/restart RC with one or more module missing, it will display a warning but still start up. When running an op mode, if one or more modules gets disconnected, the RC & DS will display warnings, and robot will keep on working in spite of the missing module(s). If a disconnected module gets physically reconnected the RC will auto detect the module and the user will regain control of the recently connected module. Warning messages are more helpful (identifies the type of module that’s missing plus its USB serial number). Code changes to fix the null gamepad reference when users try to reference the gamepads in the init() portion of their op mode. NXT light sensor output is now properly scaled. Note that teams might have to readjust their light threshold values in their op modes. On DS user interface, gamepad icon for a driver will disappear if the matching gamepad is disconnected or if that gamepad gets designated as a different driver. Robot Protocol (ROBOCOL) version number info is displayed in About screen on RC and DS apps. Incorporated a display filter on pairing screen to filter out devices that don’t use the “-“ format.
This filter can be turned off to show all WiFi Direct devices. Updated text in the License file. Fixed a formatting error in OpticalDistanceSensor.toString(). Fixed an issue with a blank (“”) device name that would disrupt WiFi Direct pairing. Made a change so that the WiFi info and battery info can be displayed more quickly on the DS upon connecting to the RC. Improved javadoc generation. Modified code to make it easier to support language localization in the future. Release 16.01.04 Updated compileSdkVersion for the apps. Prevented Wifi from entering power saving mode. Removed an unused import from the driver station. Corrected "Dead zone" joystick code. LED.getDeviceName and .getConnectionInfo() return null. Apps check for ROBOCOL_VERSION mismatch. Fix for the "Telemetry also has off-by-one errors in its data string sizing / short size limitations" error. User telemetry output is sorted. Added formatting variants to the DbgLog and RobotLog APIs. Code modified to allow for a long list of op mode names. Changes to improve thread safety of RobocolDatagramSocket. Fix for the "missing hardware leaves robot controller disconnected from driver station" error. Fix for "fast tapping of Init/Start causes problems" (the toast is now only instantiated on the UI thread). Added some log statements for thread life cycle. Moved gamepad reset logic inside of initActiveOpMode() for robustness. Changes made to mitigate the risk of race conditions on public methods. Changes to try and flag when a WiFi Direct name contains non-printable characters. Fix to correct a race condition between .run() and .close() in ReadWriteRunnableStandard. Updated the FTDI driver. Made the ReadWriteRunnableStandard interface public. Fixed off-by-one errors in the Command constructor. Moved specific hardware implementations into their own package. Moved specific gamepad implementations to the hardware library. Changed the LICENSE file to the new BSD version. Fixed a race condition when shutting down Modern Robotics USB devices. Methods in the ColorSensor classes have been synchronized. Corrected isBusy() status to reflect end of motion. Corrected the "back" button keycode. The notSupported() method of the GyroSensor class was changed to protected (it should not be public). Release 15.11.04.001 Added support for the Modern Robotics Gyro: the GyroSensor class now supports the MR Gyro Sensor. Users can access heading data (about the Z axis) and can also access raw gyro data (X, Y, & Z axes). An example MRGyroTest.java op mode is included. Improved error messages: more descriptive error messages for exceptions in user code. Updated the DcMotor API. Enabled read mode on a new address in setI2cAddress. Fix so that the driver station app resets the gamepads when switching op modes. USB-related code changes to make USB comm more responsive and to display more explicit error messages: fix so that USB will recover properly if the USB bus returns garbage data; fix for a USB initialization race condition; better error reporting during FTDI open; more explicit messages during USB failures; fixed a bug so that the USB device is closed if the event loop teardown method was not called. Fixed a timer UI issue. Fixed a duplicate name UI bug (Legacy Module configuration). Fixed a race condition in EventLoopManager. Fix to keep references stable when updating the gamepad. For legacy Matrix motor/servo controllers, removed the necessity of appending "Motor" and "Servo" to controller names. Updated the HT color sensor driver to use constants from the ModernRoboticsUsbLegacyModule class. Updated the MR color sensor driver to use constants from the ModernRoboticsUsbDeviceInterfaceModule class.
Correctly handle I2C address changes in all color sensors. Updated/cleaned up op modes. Updated comments in the LinearI2cAddressChange.java example op mode. Replaced the calls to "setChannelMode" with "setMode" (to match the new name of the DcMotor method). Removed the K9AutoTime.java op mode. Added the MRGyroTest.java op mode (demonstrates how to use the MR Gyro Sensor). Added the MRRGBExample.java op mode (demonstrates how to use the MR Color Sensor). Added the HTRGBExample.java op mode (demonstrates how to use the HT legacy color sensor). Added MatrixControllerDemo.java (demonstrates how to use the legacy Matrix controller). Updated javadoc documentation. Updated release .apk files for the Robot Controller and Driver Station apps. Release 15.10.06.002 Added support for the Legacy Matrix 9.6V motor/servo controller. Cleaned up the build.gradle file. Minor UI and bug fixes for the driver station and robot controller apps. Throws an error if an Ultrasonic sensor (NXT) is not configured for legacy module port 4 or 5. Release 15.08.03.001 New user interfaces for the FTC Driver Station and FTC Robot Controller apps. An init() method is added to the OpMode class. For this release, init() is triggered right before the start() method. Eventually, the init() method will be triggered when the user presses an "INIT" button on the driver station. The init() and loop() methods are now required (i.e., they need to be overridden in the user's op mode). The start() and stop() methods are optional. A new LinearOpMode class is introduced. Teams can use LinearOpMode to create a linear (not event driven) program model. Teams can use blocking statements like Thread.sleep() within a linear op mode. The APIs for the Legacy Module and Core Device Interface Module have been updated. Support for encoders with the Legacy Module is now working. The hardware loop has been updated for better performance.

    From user chrisneagu

  • datacamp / covid-19-eda-tutorial

    Github-Explorer, This tutorial's purpose is to introduce people to the [2019 Novel Coronavirus COVID-19 (2019-nCoV) Data Repository by Johns Hopkins CSSE](https://github.com/CSSEGISandData/COVID-19) and how to explore it using some foundational packages in the Scientific Python Data Science stack.
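
    As a taste of the kind of exploration the tutorial covers, here is a minimal, hypothetical sketch (not taken from the tutorial) of pulling one of the Johns Hopkins CSSE time-series CSV files into pandas. The raw-file path and column names are assumptions about the data repository's layout and may differ between snapshots.

    ```python
    # Minimal sketch: load the global confirmed-cases time series with pandas.
    # The path and column names below are assumptions about the JHU CSSE repo layout.
    import pandas as pd

    URL = ("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/"
           "csse_covid_19_data/csse_covid_19_time_series/"
           "time_series_covid19_confirmed_global.csv")

    confirmed = pd.read_csv(URL)
    # Collapse provinces/states into one row per country, keeping only the date columns.
    by_country = (confirmed
                  .drop(columns=["Province/State", "Lat", "Long"])
                  .groupby("Country/Region")
                  .sum())
    print(by_country.loc["Italy"].tail())  # last few cumulative daily counts
    ```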

    From organization datacamp

  • erikmonjas / css-grid-12-column-layout

    Github-Explorer, https://erikmonjas.github.io/css-grid-12-column-layout/ 12 column grid using display:grid. It includes the necessary prefixes to work in Internet Explorer.

    From user erikmonjas

  • isamarvin / git-github_learning_hub

    Github-Explorer, Welcome to my personal Git and GitHub notes repository. This project is all about exploring Git, a powerful tool for tracking changes in coding projects, and GitHub, the go-to platform for collaborating and sharing code with others. Here, you'll find a comprehensive collection of my thoughts and learnings on these topics.

    From user isamarvin

  • jettbrains / -l-

    Github-Explorer, W3C Strategic Highlights September 2019 This report was prepared for the September 2019 W3C Advisory Committee Meeting (W3C Member link). See the accompanying W3C Fact Sheet — September 2019. For the previous edition, see the April 2019 W3C Strategic Highlights. For future editions of this report, please consult the latest version. A Chinese translation is available. Introduction This report highlights recent work of enhancement of the existing landscape of the Web platform and innovation for the growth and strength of the Web. 33 working groups and a dozen interest groups enable W3C to pursue its mission through the creation of Web standards, guidelines, and supporting materials. We track the tremendous work done across the Consortium through homogeneous work-spaces in GitHub, which enables better monitoring and management. We are in the middle of a period where we are chartering numerous working groups, which demonstrates the rapid pace of change of the Web platform: After 4 years, we are nearly ready to publish a Payment Request API Proposed Recommendation, and we will soon need to charter follow-on work. In the last year we chartered the Web Payment Security Interest Group. In the last year we chartered the Web Media Working Group with 7 specifications for next generation Media support on the Web. We have Accessibility Guidelines under W3C Member review, which includes Silver, a new approach. We have just launched the Decentralized Identifier Working Group, which has tremendous potential because a Decentralized Identifier (DID) is an identifier that is globally unique, resolvable with high availability, and cryptographically verifiable. We have Privacy IG (PING) under W3C Member review, which strengthens our focus on the tradeoff between privacy and function. We have a new CSS charter under W3C Member review, which maps the group's work for the next three years. In this period, W3C and the WHATWG have successfully completed the negotiation of a Memorandum of Understanding rooted in the mutual belief that having two distinct specifications claiming to be normative is generally harmful for the Web community. The MOU, signed last May, describes how the two organizations are to collaborate on the development of a single authoritative version of the HTML and DOM specifications. W3C subsequently rechartered the HTML Working Group to assist the W3C community in raising issues and proposing solutions for the HTML and DOM specifications, and for the production of W3C Recommendations from WHATWG Review Drafts. As the Web evolves continuously, some groups are looking for ways for specifications to do so as well. So-called "evergreen recommendations" or "living standards" aim to track continuous development (and maintenance) of features, on a feature-by-feature basis, while getting review and patent commitments. We see the maturation and further development of an incredible number of new technologies coming to the Web.
Continued progress in many areas demonstrates the vitality of the W3C and the Web community, as the rest of the report illustrates. Future Web Standards W3C has a variety of mechanisms for listening to what the community thinks could become good future Web standards. These include discussions with the Membership, discussions with other standards bodies, the activities of thousands of participants in over 300 community groups, and W3C Workshops. There are lots of good ideas. The W3C strategy team has been identifying promising topics and invites public participation. Recent workshops, and workshops planned or under consideration, include: Inclusive XR (5-6 November 2019, Seattle, WA, USA), to explore existing and future approaches to making Virtual and Augmented Reality experiences more inclusive, including to people with disabilities; W3C Workshop on Data Models for Transportation (12-13 September 2019, Palo Alto, CA, USA); W3C Workshop on Web Games (27-28 June 2019, Redmond, WA, USA), view report; Second W3C Workshop on the Web of Things (3-5 June 2019, Munich, Germany); W3C Workshop on Web Standardization for Graph Data; Creating Bridges: RDF, Property Graph and SQL (4-6 March 2019, Berlin, Germany), view report; and Web & Machine Learning. The Strategy Funnel documents the staff's exploration of potential new work at various phases: Exploration and Investigation, Incubation and Evaluation, and eventually the chartering of a new standards group. The Funnel view is a GitHub Project where new areas are represented by issue “cards” which move through the columns, usually from left to right. Most cards start in Exploration and move towards Chartering, or move out of the funnel. Public input is welcome at any stage, but particularly once Incubation has begun. This helps W3C identify work that is sufficiently incubated to warrant standardization, review the ecosystem around the work and indicate interest in participating in its standardization, and then draft a charter that reflects an appropriate scope. Ongoing feedback can speed up the overall standardization process. Since the previous highlights document, W3C has chartered a number of groups, and started discussion on many more: Newly chartered or rechartered: Web Application Security WG (03-Apr); Web Payment Security IG (17-Apr); Patent and Standards IG (24-Apr); Web Applications WG (14-May); Web & Networks IG (16-May); Media WG (23-May); Media and Entertainment IG (06-Jun); HTML WG (06-Jun); Decentralized Identifier WG (05-Sep). Extended: Privacy IG (PING) (30-Sep); Verifiable Claims WG (30-Sep); Service Workers WG (31-Dec); Dataset Exchange WG (31-Dec); Web of Things Working Group (31-Dec); Web Audio Working Group (31-Dec). Proposed charters / Advance Notice: Accessibility Guidelines WG; Privacy IG (PING); RDF Literal Direction WG; Timed Text WG; CSS WG; Web Authentication WG. Closed: Internationalization Tag Set IG. Meeting Industry Needs Web Payments All Web Payments specifications W3C's payments standards provide a streamlined checkout experience, enabling a consistent user experience across the Web with lower front end development costs for merchants. Users can store and reuse information and more quickly and accurately complete online transactions. The Web Payments Working Group has republished Payment Request API as a Candidate Recommendation, aiming to publish a Proposed Recommendation in Fall 2019, and is discussing use cases and features for Payment Request after publication of the 1.0 Recommendation.
Browser vendors have been finalizing implementation of features added in the past year (view the implementation report). As work continues on the Payment Handler API and its implementation (currently in Chrome and Edge Canary), one focus in 2019 is to increase adoption in other browsers. Recently, Mastercard demonstrated the use of Payment Request API to carry out EMVCo's Secure Remote Commerce (SRC) protocol, whose payment method definition is being developed with active participation by Visa, Mastercard, American Express, and Discover. Payment method availability is a key factor in merchant considerations about adopting Payment Request API. The ability to get uniform adoption of a new payment method such as Secure Remote Commerce (SRC) also depends on the availability of the Payment Handler API in browsers, or of proprietary alternatives. Web Monetization, which the Web Payments Working Group will discuss again at its face-to-face meeting in September, can be used to enable micropayments as an alternative revenue stream to advertising. Since the beginning of 2019, Amazon, Brave Software, JCB, Certus Cybersecurity Solutions and Netflix have joined the Web Payments Working Group. In April, W3C launched the Web Payment Security Interest Group to enable W3C, EMVCo, and the FIDO Alliance to collaborate on a vision for Web payment security and interoperability. Participants will define areas of collaboration and identify gaps between existing technical specifications in order to increase compatibility among different technologies, such as: How do SRC, FIDO, and Payment Request relate? The Payment Services Directive 2 (PSD2) regulations in Europe are scheduled to take effect in September 2019; what is the role of EMVCo, W3C, and FIDO technologies, and what is the current state of readiness for the deadline? How can we improve privacy on the Web at the same time as we meet industry requirements regarding user identity? Digital Publishing All Digital Publishing specifications, Publication milestones The Web is the universal publishing platform. Publishing is increasingly impacted by the Web, and the Web increasingly impacts Publishing. Topics of particular interest to Publishing@W3C include typography and layout, accessibility, usability, portability, distribution, archiving, offline access, print on demand, and reliable cross referencing. The diverse publishing community represented in the groups consists not only of traditional "trade" publishers and ebook reading system manufacturers, but also publishers of audio books, scholarly journals and educational materials, library scientists, and browser developers. The Publishing Working Group currently concentrates on audiobooks, which lack a comprehensive standard, thus incurring extra costs and time to publish in this booming market. Active development is ongoing on the future standards: Publication Manifest, the Audiobook profile for Web Publications, and the Lightweight Packaging Format. The BD Comics Manga Community Group, the Synchronized Multimedia for Publications Community Group, the Publishing Community Group, and a future group on archiving are companions to the working group where specific work is developed and incubated. The Publishing Community Group is a recently launched incubation channel for Publishing@W3C. The goal of the group is to propose, document, and prototype features broadly related to publications on the Web, reading modes and systems, and the user experience of publications. The EPUB 3 Community Group has successfully completed the revision of EPUB 3.2.
The Publishing Business Group fosters ongoing participation by members of the publishing industry and the overall ecosystem in the development of Web infrastructure to better support the needs of the industry. The Business Group serves as an additional conduit to the Publishing Working Group and several Community Groups for feedback between the publishing ecosystem and W3C. The Publishing BG has played a vital role in fostering and advancing the adoption and continued development of EPUB 3. In particular, the BG provided critical support to the update of EPUBCheck to validate EPUB content against the new EPUB 3.2 specification. This resulted in the development, in conjunction with the EPUB3 Community Group, of a new generation of EPUBCheck, i.e., the production-ready release of EPUBCheck 4.2. Media and Entertainment All Media specifications The Media and Entertainment vertical tracks media-related topics and features that create immersive experiences for end users. HTML5 brought standard audio and video elements to the Web. Standardization activities since then have aimed at turning the Web into a professional platform fully suitable for the delivery of media content and associated materials, adding features that were missing for streaming video content on the Web, such as adaptive streaming and content protection. Together with Microsoft, Comcast, Netflix and Google, W3C received a Technology & Engineering Emmy Award in April 2019 for standardization of a full TV experience on the Web. Current goals are to: Reinforce core media technologies: creation of the Media Working Group to develop media-related specifications incubated in the WICG (e.g. Media Capabilities, Picture-in-Picture, Media Session) and maintain/evolve Media Source Extensions (MSE) and Encrypted Media Extensions (EME); improve support for Media Timed Events (data cues incubation); enhance color support (HDR, wide gamut), in scope of the CSS WG and in the Color on the Web CG. Reduce fragmentation: continue annual releases of a common and testable baseline for media devices, in scope of the Web Media APIs CG and in collaboration with the CTA WAVE Project; maintain the Roadmap of Media Technologies for the Web, which highlights Web technologies that can be used to build media applications and services, as well as known gaps to enable additional use cases. Create the future: discuss perspectives for Media and Entertainment on the Web; bring the power of GPUs to the Web (graphics, machine learning, heavy processing), under incubation in the GPU for the Web CG, with a transition to a Working Group under discussion; determine next steps after the successful W3C Workshop on Web Games of June 2019 (view the report). Timed Text The Timed Text Working Group develops and maintains formats used for the representation of text synchronized with other timed media, like audio and video, and notably works on TTML, profiles of TTML, and WebVTT. Recent progress includes: a robust WebVTT implementation report that poises the specification for publication as a Proposed Recommendation; and discussions around re-chartering, notably to add a TTML Profile for Audio Description deliverable to the scope of the group, and to clarify that rendering of captions within XR content is also in scope. Immersive Web Hardware that enables Virtual Reality (VR) and Augmented Reality (AR) applications is now broadly available to consumers, offering an immersive computing platform with both new opportunities and challenges.
The ability to interact directly with immersive hardware is critical to ensuring that the web is well equipped to operate as a first-class citizen in this environment. The Immersive Web Working Group has been stabilizing the WebXR Device API while the companion Immersive Web Community Group incubates the next series of features identified as key for the future of the Immersive Web. W3C plans a workshop focused on the needs and benefits at the intersection of VR & Accessibility (Inclusive XR), on 5-6 November 2019 in Seattle, WA, USA, to explore existing and future approaches to making Virtual and Augmented Reality experiences more inclusive. Web & Telecommunications The Web is the Open Platform for Mobile. Telecommunication service providers and network equipment providers have long been critical actors in the deployment of Web technologies. As the Web platform matures, it brings richer and richer capabilities to extend existing services to new users and devices, and to propose new and innovative services. Real-Time Communications (WebRTC) All Real-Time Communications specifications WebRTC has reshaped the whole communication landscape by making any connected device a potential communication end-point, bringing audio and video communications anywhere, on any network, vastly expanding the ability of operators to reach their customers. WebRTC serves as the cornerstone of many online communication and collaboration services. The WebRTC Working Group aims to bring WebRTC 1.0 (and the companion specification Media Capture and Streams) to Recommendation by the end of 2019. Intense efforts are focused on testing (supported by a dedicated hackathon at IETF 104) and interoperability. The group is considering moving features that have not gotten enough traction to separate modules or to a later minor revision of the spec. Beyond WebRTC 1.0, the WebRTC Working Group will focus its efforts on WebRTC NV, which the group has started documenting by identifying use cases. Web & Networks Recently launched, in the wake of the May 2018 Web5G workshop, the Web & Networks Interest Group is chaired by representatives from AT&T, China Mobile and Intel, with a goal to explore solutions for web applications to achieve better performance and resource allocation, both on the device and the network. The group's first efforts are around use cases, privacy & security requirements, and liaisons. Automotive All Automotive specifications To create a rich application ecosystem for vehicles and other devices allowed to connect to the vehicle, the W3C Automotive Working Group is delivering a service specification to expose all common vehicle signals (engine temperature, fuel/charge level, range, tire pressure, speed, etc.). The Vehicle Information Service Specification (VISS), which is a Candidate Recommendation, is seeing more implementations across the industry. It provides the access method to a common data model for all the vehicle signals (presently encapsulating a thousand or so different data elements) and will be growing to accommodate advances in automotive technology such as autonomous and driver-assist technologies and electrification.
The Automotive and Web Platform Business Group acts as an incubator for prospective standards work. One of its task forces is using W3C VISS to perform data sampling and off-board the information to the cloud. Access to the wealth of information that W3C's auto signals standard exposes is of interest to regulators, urban planners, insurance companies, auto manufacturers, fleet managers and owners, service providers and others. In addition to the components needed for data sampling and edge computing, capturing user and owner consent, information collection methods, and the handling of data are in scope. The upcoming W3C Workshop on Data Models for Transportation (September 2019) is expected to focus on the need for additional ontologies around the transportation space. Web of Things All Web of Things specifications W3C's Web of Things work is designed to bridge disparate technology stacks to allow devices to work together and achieve scale, thus enabling the potential of the Internet of Things by eliminating fragmentation and fostering interoperability. Thing descriptions expressed in JSON-LD cover the behavior, interaction affordances, data schema, security configuration, and protocol bindings. The Web of Things complements existing IoT ecosystems to reduce the cost and risk for suppliers and consumers of applications that create value by combining multiple devices and information services. There are many sectors that will benefit, e.g. smart homes, smart cities, smart industry, smart agriculture, smart healthcare and many more. The Web of Things Working Group is finishing the initial Web of Things standards, the Web of Things Architecture and Thing Descriptions, with support from the Web of Things Interest Group. Strengthening the Core of the Web HTML The HTML Working Group was chartered in early June to assist the W3C community in raising issues and proposing solutions for the HTML and DOM specifications, and to produce W3C Recommendations from WHATWG Review Drafts. A few days before, W3C and the WHATWG signed a Memorandum of Understanding outlining the agreement to collaborate on the development of a single version of the HTML and DOM specifications. Issues and proposed solutions for HTML and DOM are handled via the newly rechartered HTML Working Group in the WHATWG repositories. The HTML Working Group is targeting November 2019 to bring HTML and DOM to Candidate Recommendation. CSS All CSS specifications CSS is a critical part of the Open Web Platform. The CSS Working Group gathers requirements from two large groups of CSS users: the publishing industry and application developers. Within W3C, those groups are exemplified by the Publishing groups and the Web Platform Working Group. The former requires things like better pagination support and advanced font handling; the latter needs intelligent (and fast!) scrolling and animations. What we know as CSS is actually a collection of almost a hundred specifications, referred to as ‘modules’. The current state of CSS is defined by a snapshot, updated once a year. The group also publishes an index defining every term defined by CSS specifications. Fonts All Fonts specifications The Web Fonts Working Group develops specifications that allow the interoperable deployment of downloadable fonts on the Web, with a focus on Progressive Font Enrichment as well as maintenance of the WOFF Recommendations.
Recent and ongoing work includes: Early API experiments by Adobe and Monotype have demonstrated the feasibility of a font enrichment API, where a server delivers a font with minimal glyph repertoire and the client can query the full repertoire and request additional subsets on-the-fly. In other experiments, the Brotli compression used in WOFF 2 was extended to support shared dictionaries and patch update. Metrics to quantify improvement are a current hot discussion topic. The group will meet at ATypi 2019 in Japan, to gather requirements from the international typography community. The group will first produce a report summarizing the strengths and weaknesses of each prototype solution by Q2 2020. SVG All SVG specifications SVG is an important and widely-used part of the Open Web Platform. The SVG Working Group focuses on aligning the SVG 2.0 specification with browser implementations, having split the specification into a currently-implemented 2.0 and a forward-looking 2.1. Current activity is on stabilization, increased integration with the Open Web Platform, and test coverage analysis. The Working Group was rechartered in March 2019. A new work item concerns native (non-Web-browser) uses of SVG as a non-interactive, vector graphics format. Audio The Web Audio Working Group was extended to finish its work on the Web Audio API, expecting to publish it as a Recommendation by year end. The specification enables synthesizing audio in the browser. Audio operations are performed with audio nodes, which are linked together to form a modular audio routing graph. Multiple sources — with different types of channel layout — are supported. This modular design provides the flexibility to create complex audio functions with dynamic effects. The first version of Web Audio API is now feature complete and is implemented in all modern browsers. Work has started on the next version, and new features are being incubated in the Audio Community Group. Performance Web Performance All Web Performance specifications There are currently 18 specifications in development in the Web Performance Working Group aiming to provide methods to observe and improve aspects of application performance of user agent features and APIs. The W3C team is looking at related work incubated in the W3C GPU for the Web (WebGPU) Community Group which is poised to transition to a W3C Working Group. A preliminary draft charter is available. WebAssembly All WebAssembly specifications WebAssembly improves Web performance and power by being a virtual machine and execution environment enabling loaded pages to run native (compiled) code. It is deployed in Firefox, Edge, Safari and Chrome. The specification will soon reach Candidate Recommendation. WebAssembly enables near-native performance, optimized load time, and perhaps most importantly, a compilation target for existing code bases. While it has a small number of native types, much of the performance increase relative to Javascript derives from its use of consistent typing. WebAssembly leverages decades of optimization for compiled languages and the byte code is optimized for compactness and streaming (the web page starts executing while the rest of the code downloads). Network and API access all occurs through accompanying Javascript libraries -- the security model is identical to that of Javascript. 
Requirements gathering and language development occur in the Community Group while the Working Group manages test development, community review and progression of specifications on the Recommendation Track. Testing Browser testing plays a critical role in the growth of the Web by: improving the reliability of Web technology definitions; improving the quality of implementations of these technologies by helping vendors to detect bugs in their products; and improving the data available to Web developers on known bugs and deficiencies of Web technologies by publishing the results of these tests. Browser Testing and Tools The Browser Testing and Tools Working Group is developing WebDriver version 2, having published the W3C Recommendation of WebDriver last year. WebDriver acts as a remote control interface that enables introspection and control of user agents, provides a platform- and language-neutral wire protocol as a way for out-of-process programs to remotely instruct the behavior of Web browsers, and emulates the actions of a real person using the browser. WebPlatform Tests The WebPlatform Tests project now provides a mechanism, TestDriver, that allows tests which previously needed to be run manually to be fully automated. TestDriver enables sending trusted key and mouse events, sending complex series of trusted pointer and key interactions for things like in-content drag-and-drop or pinch zoom, and even file upload. In 2014 W3C began work on this coordinated open-source effort to build a cross-browser test suite for the Web Platform, which the WHATWG and all major browsers have adopted. Web of Data All Data specifications There have been several great success stories around the standardization of data on the web over the past year. Verifiable Claims seems to have significant uptake. It is also significant that the Decentralized Identifier WG charter received numerous favorable reviews, and the group was just recently launched. JSON-LD has been a major success, with large deployment on Web sites via schema.org: JSON-LD 1.1 has completed technical work and is about to transition to CR; more than 25% of websites today include schema.org data in JSON-LD; the Web of Things description has been in CR since May, making use of JSON-LD; the Verifiable Credentials data model has been in CR since July, also making use of JSON-LD; there is continued strong interest in decentralized identifiers; and the TAG is engaged in reframing core documents, such as the Ethical Web Principles, to include data on the web within their scope. Data is increasingly important for all organizations, especially with the rise of IoT and Big Data. W3C has a mature and extensive suite of standards relating to data that were developed over two decades of experience, with plans for further work on making it easier for developers to work with graph data and knowledge graphs. Linked Data is about the use of URIs as names for things, the ability to dereference these URIs to get further information, and the inclusion of links to other data. There are ever-increasing sources of open Linked Data on the Web, as well as data services that are restricted to the suppliers and consumers of those services. The digital transformation of industry is seeking to exploit advanced digital technologies. This will help businesses integrate horizontally along the supply and value chains, and vertically from the factory floor to the office floor. W3C is seeking to make it easier to support enterprise-wide data management and governance, reflecting the strategic importance of data to modern businesses.
Traditional approaches to data have focused on tabular databases (SQL/RDBMS), Comma Separated Value (CSV) files, and data embedded in PDF documents and spreadsheets. We're now in midst of a major shift to graph data with nodes and labeled directed links between them. Graph data is: Faster than using SQL and associated JOIN operations More favorable to integrating data from heterogeneous sources Better suited to situations where the data model is evolving In the wake of the recent W3C Workshop on Graph Data we are in the process of launching a Graph Standardization Business Group to provide a business perspective with use cases and requirements, to coordinate technical standards work and liaisons with external organizations. Web for All Security, Privacy, Identity All Security specifications, all Privacy specifications Authentication on the Web As the WebAuthn Level 1 W3C Recommendation published last March is seeing wide implementation and adoption of strong cryptographic authentication, work is proceeding on Level 2. The open standard Web API gives native authentication technology built into native platforms, browsers, operating systems (including mobile) and hardware, offering protection against hacking, credential theft, phishing attacks, thus aiming to end the era of passwords as a security construct. You may read more in our March press release. Privacy An increasing number of W3C specifications are benefitting from Privacy and Security review; there are security and privacy aspects to every specification. Early review is essential. Working with the TAG, the Privacy Interest Group has updated the Self-Review Questionnaire: Security and Privacy. Other recent work of the group includes public blogging further to the exploration of anti-patterns in standards and permission prompts. Security The Web Application Security Working Group adopted Feature Policy, aiming to allow developers to selectively enable, disable, or modify the behavior of some of these browser features and APIs within their application; and Fetch Metadata, aiming to provide servers with enough information to make a priori decisions about whether or not to service a request based on the way it was made, and the context in which it will be used. The Web Payment Security Interest Group, launched last April, convenes members from W3C, EMVCo, and the FIDO Alliance to discuss cooperative work to enhance the security and interoperability of Web payments (read more about payments). Internationalization (i18n) All Internationalization specifications, educational articles related to Internationalization, spec developers checklist Only a quarter or so current Web users use English online and that proportion will continue to decrease as the Web reaches more and more communities of limited English proficiency. If the Web is to live up to the "World Wide" portion of its name, and for the Web to truly work for stakeholders all around the world engaging with content in various languages, it must support the needs of worldwide users as they engage with content in the various languages. The growth of epublishing also brings requirements for new features and improved typography on the Web. It is important to ensure the needs of local communities are captured. The W3C Internationalization Initiative was set up to increase in-house resources dedicated to accelerating progress in making the World Wide Web "worldwide" by gathering user requirements, supporting developers, and education & outreach. 
For an overview of current projects see the i18n radar. W3C's Internationalization efforts progressed on a number of fronts recently: Requirements: new African and European language groups will work on gap analysis, errata and layout requirements. Gap analysis: Japanese, Devanagari, Bengali, Tamil, Lao, Khmer, Javanese, and Ethiopic were updated in the gap-analysis documents. Layout requirements documents: notable progress tracked in the Southeast Asian Task Force while work continues on Chinese layout requirements. Developer support: Spec reviews: the i18n WG continues active review of specifications of the WHATWG and other W3C Working Groups. Short review checklist: an easy way to begin a self-review that helps spec developers understand what aspects of their spec are likely to need attention for internationalization, and points them to more detailed checklists for the relevant topics. It also helps those reviewing specs for i18n issues. Strings on the Web: Language and Direction Metadata lays out issues and discusses potential solutions for passing information about language and direction with strings in JSON or other data formats. The document was rewritten for clarity, and expanded. The group is collaborating with the JSON-LD and Web Publishing groups to develop a plan for updating RDF, JSON-LD and related specifications to handle metadata for the base direction of text (bidi). User-friendly test format: a new format was developed for Internationalization Test Suite tests, which displays helpful information about how each test works. This is particularly useful because those tests are pointed to by educational materials and gap-analysis documents. Web Platform Tests: a large number of tests in the i18n test suite have been ported to the WPT repository, including: css-counter-styles, css-ruby, css-syntax, css-test, css-text-decor, css-writing-modes, and css-pseudo. Education & outreach: for all educational materials, see the HTML & CSS Authoring Techniques. Web Accessibility All Accessibility specifications, WAI resources The Web Accessibility Initiative supports W3C's Web for All mission. Recent achievements include: Education and training: Inaccessibility of CAPTCHA was updated to bring our analysis and recommendations up to date with CAPTCHA practice today, concluding two years of extensive work and invaluable input from the public (read more on the W3C Blog). Learn why your web content and applications should be accessible. The Education and Outreach Working Group has completed revision and updating of the Business Case for Digital Accessibility. Accessibility guidelines: the Accessibility Guidelines Working Group has continued to update WCAG Techniques and Understanding WCAG 2.1, and published a Candidate Recommendation of Accessibility Conformance Testing Rules Format 1.0 to improve inter-rater reliability when evaluating conformance of web content to WCAG. An updated charter is being developed to host work on "Silver", the next generation accessibility guidelines, as well as WCAG 2.2. There are accessibility aspects to most specifications. Check your work with the FAST checklist.
Outreach to the world W3C Developer Relations To foster the excellent feedback loop between Web Standards development and Web developers, and to grow participation from that diverse community, recent W3C Developer Relations activities include: @w3cdevs tracks the enormous amount of work happening across W3C W3C Track during the Web Conference 2019 in San Francisco Tech videos: W3C published the 2019 Web Games Workshop videos The 16 September 2019 Developer Meetup in Fukuoka, Japan, is open to all and will combine a set of technical demos prepared by W3C groups, and a series of talks on a selected set of W3C technologies and projects W3C is involved with Mozilla, Google, Samsung, Microsoft and Bocoup in the organization of ViewSource 2019 in Amsterdam (read more on the W3C Blog) W3C Training In partnership with EdX, W3C's MOOC training program, W3Cx offers a complete "Front-End Web Developer" (FEWD) professional certificate program that consists of a suite of five courses on the foundational languages that power the Web: HTML5, CSS and JavaScript. We count nearly 900K students from all over the world. Translations Many Web users rely on translations of documents developed at W3C whose official language is English. W3C is extremely grateful to the continuous efforts of its community in ensuring our various deliverables in general, and in our specifications in particular, are made available in other languages, for free, ensuring their exposure to a much more diverse set of readers. Last Spring we developed a more robust system, a new listing of translations of W3C specifications and updated the instructions on how to contribute to our translation efforts. W3C Liaisons Liaisons and coordination with numerous organizations and Standards Development Organizations (SDOs) is crucial for W3C to: make sure standards are interoperable coordinate our respective agenda in Internet governance: W3C participates in ICANN, GIPO, IGF, the I* organizations (ICANN, IETF, ISOC, IAB). ensure at the government liaison level that our standards work is officially recognized when important to our membership so that products based on them (often done by our members) are part of procurement orders. W3C has ARO/PAS status with ISO. W3C participates in the EU MSP and Rolling Plan on Standardization ensure the global set of Web and Internet standards form a compatible stack of technologies, at the technical and policy level (patent regime, fragmentation, use in policy making) promote Standards adoption equally by the industry, the public sector, and the public at large Coralie Mercier, Editor, W3C Marketing & Communications $Id: Overview.html,v 1.60 2019/10/15 12:05:52 coralie Exp $ Copyright © 2019 W3C ® (MIT, ERCIM, Keio, Beihang) Usage policies apply.

    From user jettbrains

  • learn-sys / cn

    Github-Explorer, A website providing info for self-learners who want to explore the world of operating systems. The website template is from https://github.com/kaiiiz/hexo-theme-book-demo. This repo is the Chinese version.

    From organization learn-sys

    Home Page: https://learn-sys.github.io/cn

  • manudecara / algo-trading

    Github-Explorer, This is my github repository where I post trading strategies, tutorials and research on quantitative finance with R, C++ and Python. Some of the topics explored include: machine learning, high frequency trading, NLP, technical analysis and more. Hope you enjoy it!

    From user manudecara

  • metaspartan / explorer

    Github-Explorer, Ethereum Block Explorer (ETHExplorer V2) - Realtime Price Ticker, Shapeshift.io Integration, etc. (Project is currently not under active development, if you have a bug fix, please open a PR) My current project can be found at https://github.com/metaspartan/denarius (D a better cryptocurrency than ETH)

    From user metaspartan

  • molyswu / hand_detection

    Github-Explorer, Hand detection using Neural Networks (SSD) on Tensorflow. This repo documents steps and scripts used to train a hand detector using Tensorflow (Object Detection API). As with any DNN based task, the most expensive (and riskiest) part of the process has to do with finding or creating the right (annotated) dataset. I was interested mainly in detecting hands on a table (egocentric view point). I experimented first with the [Oxford Hands Dataset](http://www.robots.ox.ac.uk/~vgg/data/hands/) (the results were not good). I then tried the [Egohands Dataset](http://vision.soic.indiana.edu/projects/egohands/), which was a much better fit to my requirements. The goal of this repo/post is to demonstrate how neural networks can be applied to the (hard) problem of tracking hands (egocentric and other views), and, better still, to provide code that can be adapted to other use cases. If you use this tutorial or models in your research or project, please cite [this](#citing-this-tutorial). Here is the detector in action. <img src="images/hand1.gif" width="33.3%"><img src="images/hand2.gif" width="33.3%"><img src="images/hand3.gif" width="33.3%"> Realtime detection on a video stream from a webcam. <img src="images/chess1.gif" width="33.3%"><img src="images/chess2.gif" width="33.3%"><img src="images/chess3.gif" width="33.3%"> Detection on a Youtube video. Both examples above were run on a macbook pro **CPU** (i7, 2.5GHz, 16GB). Some fps numbers are:

| FPS | Image Size | Device | Comments |
| --- | --- | --- | --- |
| 21 | 320 * 240 | Macbook pro (i7, 2.5GHz, 16GB) | Run without visualizing results |
| 16 | 320 * 240 | Macbook pro (i7, 2.5GHz, 16GB) | Run while visualizing results (image above) |
| 11 | 640 * 480 | Macbook pro (i7, 2.5GHz, 16GB) | Run while visualizing results (image above) |

> Note: The code in this repo is written and tested with Tensorflow `1.4.0-rc0`. Using a different version may result in [some errors](https://github.com/tensorflow/models/issues/1581). You may need to [generate your own frozen model](https://pythonprogramming.net/testing-custom-object-detector-tensorflow-object-detection-api-tutorial/?completed=/training-custom-objects-tensorflow-object-detection-api-tutorial/) graph using the [model checkpoints](model-checkpoint) in the repo to fit your TF version. **Content of this document** - Motivation - Why Track/Detect hands with Neural Networks - Data preparation and network training in Tensorflow (Dataset, Import, Training) - Training the hand detection Model - Using the Detector to Detect/Track hands - Thoughts on Optimizations. > P.S. if you are using or have used the models provided here, feel free to reach out on twitter ([@vykthur](https://twitter.com/vykthur)) and share your work! ## Motivation - Why Track/Detect hands with Neural Networks? There are several existing approaches to tracking hands in the computer vision domain. Incidentally, many of these approaches are rule based (e.g. extracting background based on texture and boundary features, distinguishing between hands and background using color histograms and HOG classifiers), making them not very robust.
For example, these algorithms might get confused if the background is unusual, or in situations where sharp changes in lighting conditions cause sharp changes in skin color, or where the tracked object becomes occluded (see [this review paper](https://www.cse.unr.edu/~bebis/handposerev.pdf) on hand pose estimation from the HCI perspective). With sufficiently large datasets, neural networks provide an opportunity to train models that perform well and address the challenges of existing object tracking/detection algorithms: varied/poor lighting, noisy environments, diverse viewpoints and even occlusion. The main drawbacks to using them for real-time tracking/detection are that they can be complex, are relatively slow compared to tracking-only algorithms, and it can be quite expensive to assemble a good dataset. But things are changing with advances in fast neural networks. Furthermore, this entire area of work has been made more approachable by deep learning frameworks (such as the tensorflow object detection api) that simplify the process of training a model for custom object detection. More importantly, the advent of fast neural network models like ssd, faster r-cnn, rfcn (see [here](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models-coco-models)) etc. makes neural networks an attractive candidate for real-time detection (and tracking) applications. Hopefully, this repo demonstrates this. > If you are not interested in the process of training the detector, you can skip straight to applying the [pretrained model I provide in detecting hands](#detecting-hands). Training a model is a multi-stage process (assembling the dataset, cleaning, splitting into training/test partitions and generating an inference graph). While I lightly touch on the details of these parts, there are a few other tutorials that cover training a custom object detector using the tensorflow object detection api in more detail [see [here](https://pythonprogramming.net/training-custom-objects-tensorflow-object-detection-api-tutorial/) and [here](https://towardsdatascience.com/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9)]. I recommend you walk through those if interested in training a custom object detector from scratch. ## Data preparation and network training in Tensorflow (Dataset, Import, Training) **The Egohands Dataset** The hand detector model is built using data from the [Egohands Dataset](http://vision.soic.indiana.edu/projects/egohands/). This dataset works well for several reasons. It contains high quality, pixel level annotations (>15000 ground truth labels) where hands are located across 4800 images. All images are captured from an egocentric view (Google Glass) across 48 different environments (indoor, outdoor) and activities (playing cards, chess, jenga, solving puzzles etc.). <img src="images/egohandstrain.jpg" width="100%"> If you will be using the Egohands dataset, you can cite it as follows: > Bambach, Sven, et al. "Lending a hand: Detecting hands and recognizing activities in complex egocentric interactions." Proceedings of the IEEE International Conference on Computer Vision. 2015. The Egohands dataset (zip file with labelled data) contains 48 folders of locations where video data was collected (100 images per folder).
```
-- LOCATION_X
  -- frame_1.jpg
  -- frame_2.jpg
  ...
  -- frame_100.jpg
  -- polygons.mat  // contains annotations for all 100 images in current folder
-- LOCATION_Y
  -- frame_1.jpg
  -- frame_2.jpg
  ...
  -- frame_100.jpg
  -- polygons.mat  // contains annotations for all 100 images in current folder
```
**Converting data to Tensorflow Format** Some initial work needs to be done on the Egohands dataset to transform it into the format (`tfrecord`) which Tensorflow needs to train a model. This repo contains `egohands_dataset_clean.py`, a script that will help you generate these csv files. It downloads the egohands dataset; renames all files to include their directory names so that each filename is unique; splits the dataset into train (80%), test (10%) and eval (10%) folders; and reads in `polygons.mat` for each folder, generating bounding boxes and visualizing them to ensure correctness (see image above). Once the script is done running, you should have an images folder containing three folders: train, test and eval. Each of these folders should also contain a csv label file (`train_labels.csv`, `test_labels.csv`) that can be used to generate `tfrecords`. Note: While the egohands dataset provides four separate labels for hands (own left, own right, other left, and other right), for my purpose I am only interested in the general `hand` class and label all training data as `hand`. You can modify the data prep script to generate `tfrecords` that support 4 labels. Next: convert your dataset + csv files to tfrecords. A helpful guide on this can be found [here](https://pythonprogramming.net/creating-tfrecord-files-tensorflow-object-detection-api-tutorial/). For each folder, you should be able to generate the `train.record` and `test.record` files required in the training process (a rough sketch of this conversion step is given a little further below). ## Training the hand detection Model Now that the dataset has been assembled (and your tfrecords generated), the next task is to train a model based on this. With neural networks, it is possible to use a process called [transfer learning](https://www.tensorflow.org/tutorials/image_retraining) to shorten the amount of time needed to train the entire model. This means we can take an existing model that has been trained well on a related domain (here, image classification) and retrain its final layer(s) to detect hands for us. Sweet! Given that neural networks sometimes have thousands or millions of parameters that can take weeks or months to train, transfer learning helps shorten training time to possibly hours. Tensorflow does offer a few models (in the tensorflow [model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models-coco-models)) and I chose to use the `ssd_mobilenet_v1_coco` model as my starting point, given it is currently (one of) the fastest models (read the SSD research [paper here](https://arxiv.org/pdf/1512.02325.pdf)). The training process can be done locally on your CPU machine (which may take a while) or, better, on a (cloud) GPU machine (which is what I did). For reference, training on my macbook pro (with tensorflow compiled from source to take advantage of the mac's cpu architecture), the maximum speed I got was 5 seconds per step, as opposed to the ~0.5 seconds per step I got with a GPU. For reference, it would take about 12 days to run 200k steps on my mac (i7, 2.5GHz, 16GB) compared to ~5hrs on a GPU.
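As a concrete illustration of that csv-to-tfrecord conversion step (this is not code from this repo), here is a rough sketch of how one labelled row might be turned into a `tf.train.Example`, written against the TF 1.x API this repo targets. The CSV column names and the single-box-per-row assumption are mine; adapt them to the files the data prep script actually produces.
```python
# Hypothetical sketch of the csv -> tfrecord step (not part of this repo).
# Assumes one hand box per row with columns: filename, width, height, xmin, ymin, xmax, ymax.
import tensorflow as tf  # TF 1.x API, matching the version used in this repo

def make_example(row, encoded_jpg):
    def _bytes(v):  return tf.train.Feature(bytes_list=tf.train.BytesList(value=[v]))
    def _floats(v): return tf.train.Feature(float_list=tf.train.FloatList(value=v))
    def _ints(v):   return tf.train.Feature(int64_list=tf.train.Int64List(value=v))
    return tf.train.Example(features=tf.train.Features(feature={
        'image/encoded': _bytes(encoded_jpg),
        'image/format': _bytes(b'jpeg'),
        'image/width': _ints([row['width']]),
        'image/height': _ints([row['height']]),
        # The object detection API expects box coordinates normalized to [0, 1].
        'image/object/bbox/xmin': _floats([row['xmin'] / row['width']]),
        'image/object/bbox/xmax': _floats([row['xmax'] / row['width']]),
        'image/object/bbox/ymin': _floats([row['ymin'] / row['height']]),
        'image/object/bbox/ymax': _floats([row['ymax'] / row['height']]),
        'image/object/class/text': _bytes(b'hand'),
        'image/object/class/label': _ints([1]),  # single 'hand' class
    }))

# Usage sketch:
# writer = tf.python_io.TFRecordWriter('train.record')
# writer.write(make_example(row, open(image_path, 'rb').read()).SerializeToString())
# writer.close()
```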
> **Training on your own images**: Please use the [guide provided by Harrison from pythonprogramming](https://pythonprogramming.net/training-custom-objects-tensorflow-object-detection-api-tutorial/) on how to generate tfrecords given your label csv files and your images. The guide also covers how to start the training process if training locally [see [here](https://pythonprogramming.net/training-custom-objects-tensorflow-object-detection-api-tutorial/)]. If training in the cloud using a service like GCP, see the [guide here](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_cloud.md). As the training process progresses, the expectation is that total loss (error) gets reduced to its possible minimum (about a value of 1 or thereabouts). By observing the tensorboard graphs for total loss (see image below), it should be possible to get an idea of when the training process is complete (total loss does not decrease with further iterations/steps). I ran my training job for 200k steps (which took about 5 hours) and stopped at a total loss (error) value of 2.575. (In retrospect, I could have stopped the training at about 50k steps and gotten a similar total loss value.) With tensorflow, you can also run an evaluation concurrently that assesses your model to see how well it performs on the test data. A commonly used metric for performance is mean average precision (mAP), which is a single number used to summarize the area under the precision-recall curve. mAP is a measure of how well the model generates a bounding box that has at least a 50% overlap with the ground truth bounding box in our test dataset. For the hand detector trained here, the mAP value was **[email protected]**. mAP values range from 0-1; the higher the better. <img src="images/accuracy.jpg" width="100%"> Once training is completed, the trained inference graph (`frozen_inference_graph.pb`) is then exported (see the earlier referenced guides for how to do this) and saved in the `hand_inference_graph` folder. Now it's time to do some interesting detection. ## Using the Detector to Detect/Track hands If you have not done this yet, please follow the guide on installing [Tensorflow and the Tensorflow object detection api](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md). This will walk you through setting up the tensorflow framework and cloning the tensorflow github repo. Detection then consists of the following steps:
- Load the `frozen_inference_graph.pb` trained on the hands dataset as well as the corresponding label map. In this repo, this is done in the `utils/detector_utils.py` script by the `load_inference_graph` method.
```python
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')
    sess = tf.Session(graph=detection_graph)
print("> ====== Hand Inference graph loaded.")
```
- Detect hands. In this repo, this is done in the `utils/detector_utils.py` script by the `detect_objects` method.
```python
(boxes, scores, classes, num) = sess.run(
    [detection_boxes, detection_scores, detection_classes, num_detections],
    feed_dict={image_tensor: image_np_expanded})
```
- Visualize the detected bounding boxes. In this repo, this is done in the `utils/detector_utils.py` script by the `draw_box_on_image` method.
This repo contains two scripts that tie all these steps together.
This repo contains two scripts that tie all these steps together.

- `detect_multi_threaded.py`: A threaded implementation for reading camera video input and running detection. Takes a set of command line flags to set parameters such as `--display` (visualize detections), image parameters `--width` and `--height`, and the video `--source` (0 for camera), etc.
- `detect_single_threaded.py`: Same as above, but single threaded. This script works for video files by setting the video `--source` parameter (path to a video file).

```cmd
# load and run detection on video at path "videos/chess.mov"
python detect_single_threaded.py --source videos/chess.mov
```

> Update: If you do have errors loading the frozen inference graph in this repo, feel free to generate a new graph that fits your TF version from the model checkpoint in this repo. Use the [export_inference_graph.py](https://github.com/tensorflow/models/blob/master/research/object_detection/export_inference_graph.py) script provided in the tensorflow object detection api repo. More guidance on this [here](https://pythonprogramming.net/testing-custom-object-detector-tensorflow-object-detection-api-tutorial/?completed=/training-custom-objects-tensorflow-object-detection-api-tutorial/).

## Thoughts on Optimization

A few things that led to noticeable performance increases:

- Threading: It turns out that reading images from a webcam is a heavy I/O operation which, if run on the main application thread, can slow down the program. I implemented some good ideas from [Adrian Rosebrock](https://www.pyimagesearch.com/2017/02/06/faster-video-file-fps-with-cv2-videocapture-and-opencv/) on parallelizing image capture across multiple worker threads. This mostly led to an FPS increase of about 5 points (a short capture-thread sketch follows at the end of this section).
- For those new to OpenCV: the `cv2.read()` method returns images in [BGR format](https://www.learnopencv.com/why-does-opencv-use-bgr-color-format/). Ensure you convert to RGB before detection (accuracy will be much reduced if you don't).

```python
cv2.cvtColor(image_np, cv2.COLOR_BGR2RGB)
```

- Keeping your input image small will increase fps without any significant accuracy drop. (I used about 320 x 240 compared to the 1280 x 720 which my webcam provides.)
- Model Quantization. Moving from the current 32-bit representation to 8-bit can achieve up to a 4x reduction in the memory required to load and store models. One way to further speed up this model is to explore the use of [8-bit fixed point quantization](https://heartbeat.fritz.ai/8-bit-quantization-and-tensorflow-lite-speeding-up-mobile-inference-with-low-precision-a882dfcafbbd).

Performance can also be increased by a clever combination of tracking algorithms with the already decent detection, and this is something I am still experimenting with. Have ideas for better optimization? Please share!

<img src="images/general.jpg" width="100%">

Note: The detector does reflect some limitations associated with the training set. This includes non-egocentric viewpoints, very noisy backgrounds (e.g. in a sea of hands) and sometimes skin tone. There is opportunity to improve these with additional data.
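As referenced in the threading note above, here is a minimal sketch of moving webcam capture onto a background thread so the main loop never blocks on camera I/O. The class and method names are illustrative, not the ones used in this repo's threaded script:

```python
# Illustrative sketch: keep grabbing frames on a worker thread so the
# detection loop always reads the most recent frame without blocking.
import threading
import cv2


class WebcamVideoStream:
    def __init__(self, src=0):
        self.stream = cv2.VideoCapture(src)
        self.grabbed, self.frame = self.stream.read()
        self.stopped = False

    def start(self):
        threading.Thread(target=self._update, daemon=True).start()
        return self

    def _update(self):
        # Continuously pull frames in the background until stopped.
        while not self.stopped:
            self.grabbed, self.frame = self.stream.read()

    def read(self):
        return self.frame

    def stop(self):
        self.stopped = True
        self.stream.release()
```

Usage would be along the lines of `stream = WebcamVideoStream(0).start()` once at startup, then `frame = stream.read()` inside the detection loop.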
## Integrating Multiple DNNs

One way to make things more interesting is to integrate our new knowledge of where "hands" are with other detectors trained to recognize other objects. Unfortunately, while our hand detector can in fact detect hands, it cannot detect other objects (a factor of how it is trained). To create a detector that classifies multiple different objects would mean a long, involved process of assembling datasets for each class and a lengthy training process.

> Given the above, a potential strategy is to explore structures that allow us to **efficiently** interleave output from multiple pretrained models for various object classes and have them detect multiple objects on a single image. An example of this is my primary use case, where I am interested in understanding the position of objects on a table with respect to hands on the same table. I am currently doing some work on a threaded application that loads multiple detectors and outputs bounding boxes on a single image (a minimal sketch of the idea is shown below). More on this soon.
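Here is a minimal sketch of that interleaving idea. It assumes TensorFlow 1.x, two frozen graphs exported by the object detection api (the hand detector plus a hypothetical second, e.g. COCO-trained, detector whose path below is made up), and the standard tensor names the api exports:

```python
# Illustrative sketch: load two frozen detection graphs and run both on the
# same frame; each returns (boxes, scores, classes) that can be drawn together.
import tensorflow as tf


def load_frozen_graph(path):
    graph = tf.Graph()
    with graph.as_default():
        graph_def = tf.GraphDef()
        with tf.gfile.GFile(path, 'rb') as f:
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')
    return graph, tf.Session(graph=graph)


def detect(graph, sess, image_np_expanded):
    # Standard tensor names exported by the object detection api.
    image_tensor = graph.get_tensor_by_name('image_tensor:0')
    outputs = [graph.get_tensor_by_name(name) for name in
               ('detection_boxes:0', 'detection_scores:0', 'detection_classes:0')]
    return sess.run(outputs, feed_dict={image_tensor: image_np_expanded})


hand_graph, hand_sess = load_frozen_graph('hand_inference_graph/frozen_inference_graph.pb')
# Hypothetical second detector (e.g. a COCO-trained model); the path is illustrative.
obj_graph, obj_sess = load_frozen_graph('object_inference_graph/frozen_inference_graph.pb')

# For each camera frame (image_np_expanded), run both detectors and overlay
# both sets of boxes on the same image.
```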

    From user molyswu

  • openbiox / ucscxenashiny

    Github-Explorer, 📊 An R package for interactively exploring UCSC Xena https://xenabrowser.net/datapages/; Book: https://lishensuo.github.io/UCSCXenaShiny_Book; App online: https://shiny.hiplot.cn/ucsc-xena-shiny/, https://shiny.zhoulab.ac.cn/UCSCXenaShiny

    From organization openbiox

    Home Page: https://openbiox.github.io/UCSCXenaShiny/

  • rramatchandran / big-o-performance-java

    Github-Explorer, # big-o-performance A simple html app to demonstrate performance costs of data structures. - Clone the project - Navigate to the root of the project in a terminal or command prompt - Run 'npm install' - Run 'npm start' - Go to the URL specified in the terminal or command prompt to try out the app. # This app was created from the Create React App NPM. Below are instructions from that project. Below you will find some information on how to perform common tasks. You can find the most recent version of this guide [here](https://github.com/facebookincubator/create-react-app/blob/master/template/README.md). ## Table of Contents - [Updating to New Releases](#updating-to-new-releases) - [Sending Feedback](#sending-feedback) - [Folder Structure](#folder-structure) - [Available Scripts](#available-scripts) - [npm start](#npm-start) - [npm run build](#npm-run-build) - [npm run eject](#npm-run-eject) - [Displaying Lint Output in the Editor](#displaying-lint-output-in-the-editor) - [Installing a Dependency](#installing-a-dependency) - [Importing a Component](#importing-a-component) - [Adding a Stylesheet](#adding-a-stylesheet) - [Post-Processing CSS](#post-processing-css) - [Adding Images and Fonts](#adding-images-and-fonts) - [Adding Bootstrap](#adding-bootstrap) - [Adding Flow](#adding-flow) - [Adding Custom Environment Variables](#adding-custom-environment-variables) - [Integrating with a Node Backend](#integrating-with-a-node-backend) - [Proxying API Requests in Development](#proxying-api-requests-in-development) - [Deployment](#deployment) - [Now](#now) - [Heroku](#heroku) - [Surge](#surge) - [GitHub Pages](#github-pages) - [Something Missing?](#something-missing) ## Updating to New Releases Create React App is divided into two packages: * `create-react-app` is a global command-line utility that you use to create new projects. * `react-scripts` is a development dependency in the generated projects (including this one). You almost never need to update `create-react-app` itself: it delegates all the setup to `react-scripts`. When you run `create-react-app`, it always creates the project with the latest version of `react-scripts` so you’ll get all the new features and improvements in newly created apps automatically. To update an existing project to a new version of `react-scripts`, [open the changelog](https://github.com/facebookincubator/create-react-app/blob/master/CHANGELOG.md), find the version you’re currently on (check `package.json` in this folder if you’re not sure), and apply the migration instructions for the newer versions. In most cases bumping the `react-scripts` version in `package.json` and running `npm install` in this folder should be enough, but it’s good to consult the [changelog](https://github.com/facebookincubator/create-react-app/blob/master/CHANGELOG.md) for potential breaking changes. We commit to keeping the breaking changes minimal so you can upgrade `react-scripts` painlessly. ## Sending Feedback We are always open to [your feedback](https://github.com/facebookincubator/create-react-app/issues). ## Folder Structure After creation, your project should look like this: ``` my-app/ README.md index.html favicon.ico node_modules/ package.json src/ App.css App.js index.css index.js logo.svg ``` For the project to build, **these files must exist with exact filenames**: * `index.html` is the page template; * `favicon.ico` is the icon you see in the browser tab; * `src/index.js` is the JavaScript entry point. You can delete or rename the other files.
You may create subdirectories inside `src`. For faster rebuilds, only files inside `src` are processed by Webpack. You need to **put any JS and CSS files inside `src`**, or Webpack won’t see them. You can, however, create more top-level directories. They will not be included in the production build so you can use them for things like documentation. ## Available Scripts In the project directory, you can run: ### `npm start` Runs the app in the development mode.<br> Open [http://localhost:3000](http://localhost:3000) to view it in the browser. The page will reload if you make edits.<br> You will also see any lint errors in the console. ### `npm run build` Builds the app for production to the `build` folder.<br> It correctly bundles React in production mode and optimizes the build for the best performance. The build is minified and the filenames include the hashes.<br> Your app is ready to be deployed! ### `npm run eject` **Note: this is a one-way operation. Once you `eject`, you can’t go back!** If you aren’t satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project. Instead, it will copy all the configuration files and the transitive dependencies (Webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own. You don’t have to ever use `eject`. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it. ## Displaying Lint Output in the Editor >Note: this feature is available with `[email protected]` and higher. Some editors, including Sublime Text, Atom, and Visual Studio Code, provide plugins for ESLint. They are not required for linting. You should see the linter output right in your terminal as well as the browser console. However, if you prefer the lint results to appear right in your editor, there are some extra steps you can do. You would need to install an ESLint plugin for your editor first. >**A note for Atom `linter-eslint` users** >If you are using the Atom `linter-eslint` plugin, make sure that **Use global ESLint installation** option is checked: ><img src="http://i.imgur.com/yVNNHJM.png" width="300"> Then make sure `package.json` of your project ends with this block: ```js { // ... "eslintConfig": { "extends": "./node_modules/react-scripts/config/eslint.js" } } ``` Projects generated with `[email protected]` and higher should already have it. If you don’t need ESLint integration with your editor, you can safely delete those three lines from your `package.json`. Finally, you will need to install some packages *globally*: ```sh npm install -g eslint babel-eslint eslint-plugin-react eslint-plugin-import eslint-plugin-jsx-a11y eslint-plugin-flowtype ``` We recognize that this is suboptimal, but it is currently required due to the way we hide the ESLint dependency. The ESLint team is already [working on a solution to this](https://github.com/eslint/eslint/issues/3458) so this may become unnecessary in a couple of months. ## Installing a Dependency The generated project includes React and ReactDOM as dependencies. It also includes a set of scripts used by Create React App as a development dependency. 
You may install other dependencies (for example, React Router) with `npm`: ``` npm install --save <library-name> ``` ## Importing a Component This project setup supports ES6 modules thanks to Babel. While you can still use `require()` and `module.exports`, we encourage you to use [`import` and `export`](http://exploringjs.com/es6/ch_modules.html) instead. For example: ### `Button.js` ```js import React, { Component } from 'react'; class Button extends Component { render() { // ... } } export default Button; // Don’t forget to use export default! ``` ### `DangerButton.js` ```js import React, { Component } from 'react'; import Button from './Button'; // Import a component from another file class DangerButton extends Component { render() { return <Button color="red" />; } } export default DangerButton; ``` Be aware of the [difference between default and named exports](http://stackoverflow.com/questions/36795819/react-native-es-6-when-should-i-use-curly-braces-for-import/36796281#36796281). It is a common source of mistakes. We suggest that you stick to using default imports and exports when a module only exports a single thing (for example, a component). That’s what you get when you use `export default Button` and `import Button from './Button'`. Named exports are useful for utility modules that export several functions. A module may have at most one default export and as many named exports as you like. Learn more about ES6 modules: * [When to use the curly braces?](http://stackoverflow.com/questions/36795819/react-native-es-6-when-should-i-use-curly-braces-for-import/36796281#36796281) * [Exploring ES6: Modules](http://exploringjs.com/es6/ch_modules.html) * [Understanding ES6: Modules](https://leanpub.com/understandinges6/read#leanpub-auto-encapsulating-code-with-modules) ## Adding a Stylesheet This project setup uses [Webpack](https://webpack.github.io/) for handling all assets. Webpack offers a custom way of “extending” the concept of `import` beyond JavaScript. To express that a JavaScript file depends on a CSS file, you need to **import the CSS from the JavaScript file**: ### `Button.css` ```css .Button { padding: 20px; } ``` ### `Button.js` ```js import React, { Component } from 'react'; import './Button.css'; // Tell Webpack that Button.js uses these styles class Button extends Component { render() { // You can use them as regular CSS styles return <div className="Button" />; } } ``` **This is not required for React** but many people find this feature convenient. You can read about the benefits of this approach [here](https://medium.com/seek-ui-engineering/block-element-modifying-your-javascript-components-d7f99fcab52b). However you should be aware that this makes your code less portable to other build tools and environments than Webpack. In development, expressing dependencies this way allows your styles to be reloaded on the fly as you edit them. In production, all CSS files will be concatenated into a single minified `.css` file in the build output. If you are concerned about using Webpack-specific semantics, you can put all your CSS right into `src/index.css`. It would still be imported from `src/index.js`, but you could always remove that import if you later migrate to a different build tool. ## Post-Processing CSS This project setup minifies your CSS and adds vendor prefixes to it automatically through [Autoprefixer](https://github.com/postcss/autoprefixer) so you don’t need to worry about it. 
For example, this: ```css .App { display: flex; flex-direction: row; align-items: center; } ``` becomes this: ```css .App { display: -webkit-box; display: -ms-flexbox; display: flex; -webkit-box-orient: horizontal; -webkit-box-direction: normal; -ms-flex-direction: row; flex-direction: row; -webkit-box-align: center; -ms-flex-align: center; align-items: center; } ``` There is currently no support for preprocessors such as Less, or for sharing variables across CSS files. ## Adding Images and Fonts With Webpack, using static assets like images and fonts works similarly to CSS. You can **`import` an image right in a JavaScript module**. This tells Webpack to include that image in the bundle. Unlike CSS imports, importing an image or a font gives you a string value. This value is the final image path you can reference in your code. Here is an example: ```js import React from 'react'; import logo from './logo.png'; // Tell Webpack this JS file uses this image console.log(logo); // /logo.84287d09.png function Header() { // Import result is the URL of your image return <img src={logo} alt="Logo" />; } export default Header; ``` This works in CSS too: ```css .Logo { background-image: url(./logo.png); } ``` Webpack finds all relative module references in CSS (they start with `./`) and replaces them with the final paths from the compiled bundle. If you make a typo or accidentally delete an important file, you will see a compilation error, just like when you import a non-existent JavaScript module. The final filenames in the compiled bundle are generated by Webpack from content hashes. If the file content changes in the future, Webpack will give it a different name in production so you don’t need to worry about long-term caching of assets. Please be advised that this is also a custom feature of Webpack. **It is not required for React** but many people enjoy it (and React Native uses a similar mechanism for images). However it may not be portable to some other environments, such as Node.js and Browserify. If you prefer to reference static assets in a more traditional way outside the module system, please let us know [in this issue](https://github.com/facebookincubator/create-react-app/issues/28), and we will consider support for this. ## Adding Bootstrap You don’t have to use [React Bootstrap](https://react-bootstrap.github.io) together with React but it is a popular library for integrating Bootstrap with React apps. If you need it, you can integrate it with Create React App by following these steps: Install React Bootstrap and Bootstrap from NPM. React Bootstrap does not include Bootstrap CSS so this needs to be installed as well: ``` npm install react-bootstrap --save npm install bootstrap@3 --save ``` Import Bootstrap CSS and optionally Bootstrap theme CSS in the ```src/index.js``` file: ```js import 'bootstrap/dist/css/bootstrap.css'; import 'bootstrap/dist/css/bootstrap-theme.css'; ``` Import required React Bootstrap components within ```src/App.js``` file or your custom component files: ```js import { Navbar, Jumbotron, Button } from 'react-bootstrap'; ``` Now you are ready to use the imported React Bootstrap components within your component hierarchy defined in the render method. Here is an example [`App.js`](https://gist.githubusercontent.com/gaearon/85d8c067f6af1e56277c82d19fd4da7b/raw/6158dd991b67284e9fc8d70b9d973efe87659d72/App.js) redone using React Bootstrap.
## Adding Flow Flow typing is currently [not supported out of the box](https://github.com/facebookincubator/create-react-app/issues/72) with the default `.flowconfig` generated by Flow. If you run it, you might get errors like this: ```js node_modules/fbjs/lib/Deferred.js.flow:60 60: Promise.prototype.done.apply(this._promise, arguments); ^^^^ property `done`. Property not found in 495: declare class Promise<+R> { ^ Promise. See lib: /private/tmp/flow/flowlib_34952d31/core.js:495 node_modules/fbjs/lib/shallowEqual.js.flow:29 29: return x !== 0 || 1 / (x: $FlowIssue) === 1 / (y: $FlowIssue); ^^^^^^^^^^ identifier `$FlowIssue`. Could not resolve name src/App.js:3 3: import logo from './logo.svg'; ^^^^^^^^^^^^ ./logo.svg. Required module not found src/App.js:4 4: import './App.css'; ^^^^^^^^^^^ ./App.css. Required module not found src/index.js:5 5: import './index.css'; ^^^^^^^^^^^^^ ./index.css. Required module not found ``` To fix this, change your `.flowconfig` to look like this: ```ini [libs] ./node_modules/fbjs/flow/lib [options] esproposal.class_static_fields=enable esproposal.class_instance_fields=enable module.name_mapper='^\(.*\)\.css$' -> 'react-scripts/config/flow/css' module.name_mapper='^\(.*\)\.\(jpg\|png\|gif\|eot\|otf\|webp\|svg\|ttf\|woff\|woff2\|mp4\|webm\)$' -> 'react-scripts/config/flow/file' suppress_type=$FlowIssue suppress_type=$FlowFixMe ``` Re-run flow, and you shouldn’t get any extra issues. If you later `eject`, you’ll need to replace `react-scripts` references with the `<PROJECT_ROOT>` placeholder, for example: ```ini module.name_mapper='^\(.*\)\.css$' -> '<PROJECT_ROOT>/config/flow/css' module.name_mapper='^\(.*\)\.\(jpg\|png\|gif\|eot\|otf\|webp\|svg\|ttf\|woff\|woff2\|mp4\|webm\)$' -> '<PROJECT_ROOT>/config/flow/file' ``` We will consider integrating more tightly with Flow in the future so that you don’t have to do this. ## Adding Custom Environment Variables >Note: this feature is available with `[email protected]` and higher. Your project can consume variables declared in your environment as if they were declared locally in your JS files. By default you will have `NODE_ENV` defined for you, and any other environment variables starting with `REACT_APP_`. These environment variables will be defined for you on `process.env`. For example, having an environment variable named `REACT_APP_SECRET_CODE` will be exposed in your JS as `process.env.REACT_APP_SECRET_CODE`, in addition to `process.env.NODE_ENV`. These environment variables can be useful for displaying information conditionally based on where the project is deployed or consuming sensitive data that lives outside of version control. First, you need to have environment variables defined, which can vary between OSes. For example, let's say you wanted to consume a secret defined in the environment inside a `<form>`: ```jsx render() { return ( <div> <small>You are running this application in <b>{process.env.NODE_ENV}</b> mode.</small> <form> <input type="hidden" defaultValue={process.env.REACT_APP_SECRET_CODE} /> </form> </div> ); } ``` The above form is looking for a variable called `REACT_APP_SECRET_CODE` from the environment. In order to consume this value, we need to have it defined in the environment: ### Windows (cmd.exe) ```cmd set REACT_APP_SECRET_CODE=abcdef&&npm start ``` (Note: the lack of whitespace is intentional.) ### Linux, OS X (Bash) ```bash REACT_APP_SECRET_CODE=abcdef npm start ``` > Note: Defining environment variables in this manner is temporary for the life of the shell session. 
Setting permanent environment variables is outside the scope of these docs. With our environment variable defined, we start the app and consume the values. Remember that the `NODE_ENV` variable will be set for you automatically. When you load the app in the browser and inspect the `<input>`, you will see its value set to `abcdef`, and the bold text will show the environment provided when using `npm start`: ```html <div> <small>You are running this application in <b>development</b> mode.</small> <form> <input type="hidden" value="abcdef" /> </form> </div> ``` Having access to the `NODE_ENV` is also useful for performing actions conditionally: ```js if (process.env.NODE_ENV !== 'production') { analytics.disable(); } ``` ## Integrating with a Node Backend Check out [this tutorial](https://www.fullstackreact.com/articles/using-create-react-app-with-a-server/) for instructions on integrating an app with a Node backend running on another port, and using `fetch()` to access it. You can find the companion GitHub repository [here](https://github.com/fullstackreact/food-lookup-demo). ## Proxying API Requests in Development >Note: this feature is available with `[email protected]` and higher. People often serve the front-end React app from the same host and port as their backend implementation. For example, a production setup might look like this after the app is deployed: ``` / - static server returns index.html with React app /todos - static server returns index.html with React app /api/todos - server handles any /api/* requests using the backend implementation ``` Such setup is **not** required. However, if you **do** have a setup like this, it is convenient to write requests like `fetch('/api/todos')` without worrying about redirecting them to another host or port during development. To tell the development server to proxy any unknown requests to your API server in development, add a `proxy` field to your `package.json`, for example: ```js "proxy": "http://localhost:4000", ``` This way, when you `fetch('/api/todos')` in development, the development server will recognize that it’s not a static asset, and will proxy your request to `http://localhost:4000/api/todos` as a fallback. Conveniently, this avoids [CORS issues](http://stackoverflow.com/questions/21854516/understanding-ajax-cors-and-security-considerations) and error messages like this in development: ``` Fetch API cannot load http://localhost:4000/api/todos. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:3000' is therefore not allowed access. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled. ``` Keep in mind that `proxy` only has effect in development (with `npm start`), and it is up to you to ensure that URLs like `/api/todos` point to the right thing in production. You don’t have to use the `/api` prefix. Any unrecognized request will be redirected to the specified `proxy`. Currently the `proxy` option only handles HTTP requests, and it won’t proxy WebSocket connections. If the `proxy` option is **not** flexible enough for you, alternatively you can: * Enable CORS on your server ([here’s how to do it for Express](http://enable-cors.org/server_expressjs.html)). * Use [environment variables](#adding-custom-environment-variables) to inject the right server host and port into your app. ## Deployment By default, Create React App produces a build assuming your app is hosted at the server root. 
To override this, specify the `homepage` in your `package.json`, for example: ```js "homepage": "http://mywebsite.com/relativepath", ``` This will let Create React App correctly infer the root path to use in the generated HTML file. ### Now See [this example](https://github.com/xkawi/create-react-app-now) for a zero-configuration single-command deployment with [now](https://zeit.co/now). ### Heroku Use the [Heroku Buildpack for Create React App](https://github.com/mars/create-react-app-buildpack). You can find instructions in [Deploying React with Zero Configuration](https://blog.heroku.com/deploying-react-with-zero-configuration). ### Surge Install the Surge CLI if you haven't already by running `npm install -g surge`. Run the `surge` command and log in or create a new account. You just need to specify the *build* folder and your custom domain, and you are done. ```sh email: [email protected] password: ******** project path: /path/to/project/build size: 7 files, 1.8 MB domain: create-react-app.surge.sh upload: [====================] 100%, eta: 0.0s propagate on CDN: [====================] 100% plan: Free users: [email protected] IP Address: X.X.X.X Success! Project is published and running at create-react-app.surge.sh ``` Note that in order to support routers that use html5 `pushState` API, you may want to rename the `index.html` in your build folder to `200.html` before deploying to Surge. This [ensures that every URL falls back to that file](https://surge.sh/help/adding-a-200-page-for-client-side-routing). ### GitHub Pages >Note: this feature is available with `[email protected]` and higher. Open your `package.json` and add a `homepage` field: ```js "homepage": "http://myusername.github.io/my-app", ``` **The above step is important!** Create React App uses the `homepage` field to determine the root URL in the built HTML file. Now, whenever you run `npm run build`, you will see a cheat sheet with a sequence of commands to deploy to GitHub pages: ```sh git commit -am "Save local changes" git checkout -B gh-pages git add -f build git commit -am "Rebuild website" git filter-branch -f --prune-empty --subdirectory-filter build git push -f origin gh-pages git checkout - ``` You may copy and paste them, or put them into a custom shell script. You may also customize them for another hosting provider. Note that GitHub Pages doesn't support routers that use the HTML5 `pushState` history API under the hood (for example, React Router using `browserHistory`). This is because when there is a fresh page load for a url like `http://user.github.io/todomvc/todos/42`, where `/todos/42` is a frontend route, the GitHub Pages server returns 404 because it knows nothing of `/todos/42`. If you want to add a router to a project hosted on GitHub Pages, here are a couple of solutions: * You could switch from using HTML5 history API to routing with hashes. If you use React Router, you can switch to `hashHistory` for this effect, but the URL will be longer and more verbose (for example, `http://user.github.io/todomvc/#/todos/42?_k=yknaj`). [Read more](https://github.com/reactjs/react-router/blob/master/docs/guides/Histories.md#histories) about different history implementations in React Router. * Alternatively, you can use a trick to teach GitHub Pages to handle 404 by redirecting to your `index.html` page with a special redirect parameter.
You would need to add a `404.html` file with the redirection code to the `build` folder before deploying your project, and you’ll need to add code handling the redirect parameter to `index.html`. You can find a detailed explanation of this technique [in this guide](https://github.com/rafrex/spa-github-pages). ## Something Missing? If you have ideas for more “How To” recipes that should be on this page, [let us know](https://github.com/facebookincubator/create-react-app/issues) or [contribute some!](https://github.com/facebookincubator/create-react-app/edit/master/template/README.md)

    From user rramatchandran

  • silx-kit / jupyterlab-h5web

    Github-Explorer, A JupyterLab extension to explore and visualize HDF5 file contents. Based on https://github.com/silx-kit/h5web.

    From organization silx-kit

  • spagobilabs / spagobi

    Github-Explorer, Outdated version of Knowage - Business Intelligence suite. Explore https://github.com/KnowageLabs for the current repository.

    From organization spagobilabs

  • vincezh2000 / book-bazaar

    Github-Explorer, "Book Bazaar is a Java and Spring-based platform hosted on GitHub. It is designed to facilitate the sale of secondhand books within a campus community. Explore our wide selection of pre-owned titles and find your next favorite read at an affordable price."

    From user vincezh2000

  • yuecen / github-email-explorer

    Github-Explorer, 💌 A tool to get email addresses by action types such as `starred`, `watching` or `fork` on GitHub repositories, and to send email content to those addresses using a template

    From user yuecen

  • zmh-program / code-statistic

    Github-Explorer, ⚡ Dynamically generate your GitHub stat cards! Contains User Card, Repo Card, Contributor Card, Release Card, Issue Card and PR Card! Supports dark mode and API calling, and more is waiting for you to explore!

    From user zmh-program

    Home Page: https://stats.deeptrain.net

Recommended Projects

  • React

    A declarative, efficient, and flexible JavaScript library for building user interfaces.

  • Vue.js

    🖖 Vue.js is a progressive, incrementally-adoptable JavaScript framework for building UI on the web.

  • TypeScript

    TypeScript is a superset of JavaScript that compiles to clean JavaScript output.

  • TensorFlow

    An Open Source Machine Learning Framework for Everyone

  • Django

    The Web framework for perfectionists with deadlines.

  • D3

    Bring data to life with SVG, Canvas and HTML. 📊📈🎉

Recommended Topics

  • javascript

    JavaScript (JS) is a lightweight interpreted programming language with first-class functions.

  • web

    Something interesting about the web. A new door to the world.

  • server

    A server is a program made to process requests and deliver data to clients.

  • Machine learning

    Machine learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.

  • Game

    Something interesting about games, to make everyone happy.

Recommended Organizations

  • Facebook

    We are working to build community through open source technology. NB: members must have two-factor auth.

  • Microsoft

    Open source projects and samples from Microsoft.

  • Google

    Google ❤️ Open Source for everyone.

  • D3

    Data-Driven Documents code.