Prosecution Insights
Last updated: April 19, 2026
Application No. 18/492,672

SYSTEMS AND ASSOCIATED METHODS FOR GENERATING A DIGITAL REPRESENTATION OF AN ENVIRONMENT

Non-Final OA: §101, §103
Filed: Oct 23, 2023
Examiner: BACA, MATTHEW WALTER
Art Unit: 2857
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Brightai Corporation
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability With Interview: 75%

Examiner Intelligence

Career Allow Rate: 74% (83 granted / 113 resolved), above average (+5.5% vs TC avg)
Interview Lift: +1.9% across resolved cases with interview (characterized as minimal, roughly +2%)
Typical Timeline: 2y 11m average prosecution; 38 applications currently pending
Career History: 151 total applications across all art units
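A quick sanity check on the headline numbers: the sketch below reconstructs the displayed figures, assuming the grant probability is simply granted/resolved rounded to a whole percent and the with-interview figure adds the reported lift. That additive model is an assumption; the tool's actual methodology is not shown.

```python
# Illustrative arithmetic only, reconstructed from the figures shown above.
granted, resolved = 83, 113
career_allow_rate = granted / resolved               # 0.7345... -> displayed as 74%
interview_lift = 0.019                               # reported +1.9% for interviewed cases
with_interview = career_allow_rate + interview_lift  # 0.7535... -> displayed as 75%
print(f"career allow rate: {career_allow_rate:.1%}")  # 73.5%
print(f"with interview:    {with_interview:.1%}")     # 75.4%
```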

Statute-Specific Performance

§101: 20.6% (-19.4% vs TC avg)
§103: 43.6% (+3.6% vs TC avg)
§102: 13.1% (-26.9% vs TC avg)
§112: 22.1% (-17.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 113 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to for the following reasons: FIGS. 9-10, 15, and 17-19 include text that is represented in what appears to be a dot-matrix format that is unclear (not in solid black lines) and unsuitable for reproduction. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Objections

Claims 2, 6, 9-10, 14, and 17 are objected to because of the following informalities: In claim 2 lines 4-5, and claim 10 lines 9-10, “the individual sensor data type comprises a RGB sensor” should read “the individual sensor type comprises a RGB sensor.” In claim 6 line 1, and claim 14 line 1, “inertia management unit” should read “inertial measurement unit.” In claim 9 line 1, and claim 17 line 1, “inertial management unit” should read “inertial measurement unit.” In claim 10 line 7, it appears that “a processor” should read “the processor” based on apparent antecedent relation to “a processor” in line 4. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 2-17 are rejected under 35 U.S.C. 101 because the claimed invention in each of these claims is directed to the abstract idea judicial exception without significantly more.
Independent claim 10, substantially representative also of claim 2, recites: “[a]n apparatus for generating a digital representation of an environment, comprising: an input/output interface; a processor in communication with the input/output interface; and a memory storing instructions in communication with processor that, when the instructions are executed by the processor, cause the apparatus to: receive by a processor, a first series of individual sensor data from a first sensor data type deployed on a robot traversing an environment, wherein the first series is a function of the distance traversed by the robot, wherein the individual sensor data type comprises a RGB sensor, wherein the first series comprises data relating to the environment; receive, by the processor, a second series of individual sensor data from a second sensor type deployed on the robot traversing the environment, wherein the second series is a function of the distance traversed by the robot, wherein the second sensor type comprises a light detection and ranging sensor, wherein the light detection and ranging sensor is configured to generate a three-dimensional point cloud; fuse together, by the processor, the first series and the three-dimensional point cloud to create a three-dimensional image of the environment; and generate, by the processor, a digital representation of the environment based on the fusing step.” The claim limitations considered to fall within the abstract idea are highlighted in bold font above and the remaining features are “additional elements.”

Step 1 of the subject matter eligibility analysis entails determining whether the claimed subject matter falls within one of the four statutory categories of patentable subject matter identified by 35 U.S.C. 101: process, machine, manufacture, or composition of matter. Claim 10 recites an apparatus and claim 2 recites a method, and each therefore falls within a statutory category.

Step 2A, Prong One of the analysis entails determining whether the claim recites a judicial exception such as an abstract idea. Under a broadest reasonable interpretation, the highlighted portions of claim 10 fall within the abstract idea judicial exception. Specifically, under the 2019 Revised Patent Subject Matter Eligibility Guidance, the highlighted subject matter falls within the mathematical concepts category (mathematical relationships, mathematical formulas or equations, mathematical calculations). MPEP § 2106.04(a)(2). The recited function “fuse together” “the first series and the three-dimensional point cloud to create a three-dimensional image of the environment” is determined by the Examiner as falling within the mathematical relationships sub-category of mathematical concepts (MPEP 2106.04(a)(2)) because fusing RGB imaging data (two-dimensional data as described in Applicant’s specification) with three-dimensional point cloud data is fundamentally characterized by mathematical relations and calculations in terms, for example, of alignment of the respective two-dimensional and three-dimensional coordinates and in terms of associated methods described in Applicant’s specification for effectuating such fusing (e.g., localization via SLAM, which is fundamentally characterized by mathematical relations/calculations).
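For context, the coordinate alignment the examiner invokes is, in its simplest form, a pinhole-camera projection of 3D points into a 2D image. The sketch below is illustrative only: the calibration values (K, R, t) and the function name are hypothetical and are not taken from the application or the cited references.

```python
import numpy as np

# Hypothetical calibration: K is a pinhole intrinsic matrix;
# R, t map LiDAR coordinates into the camera frame.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, -0.1, 0.2])

def color_point_cloud(points_xyz, rgb_image):
    """Project 3D points into the image plane and attach the RGB color of
    the pixel each point lands on. Returns an N x 6 array: x, y, z, r, g, b."""
    cam = points_xyz @ R.T + t            # LiDAR frame -> camera frame
    cam = cam[cam[:, 2] > 0]              # keep points in front of the camera
    uvw = cam @ K.T                       # pinhole projection (homogeneous)
    uv = uvw[:, :2] / uvw[:, 2:3]         # perspective divide -> pixel coords
    h, w = rgb_image.shape[:2]
    u = uv[:, 0].astype(int)
    v = uv[:, 1].astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)   # inside the image frame
    return np.hstack([cam[ok], rgb_image[v[ok], u[ok]].astype(float)])

# Toy usage: random points a few meters ahead of a 640x480 camera.
points = np.random.uniform(-2.0, 2.0, size=(1000, 3)) + np.array([0.0, 0.0, 5.0])
image = np.zeros((480, 640, 3), dtype=np.uint8)
print(color_point_cloud(points, image).shape)  # (M, 6) for in-frame points
```

Real pipelines, including the SLAM-based localization the Office Action mentions, layer pose estimation and time synchronization on top of this projection step.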
Step 2A, Prong Two of the analysis entails determining whether the claim includes additional elements that integrate the recited judicial exception into a practical application. “A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception” (MPEP § 2106.04(d)). MPEP § 2106.04(d) sets forth considerations to be applied in Step 2A, Prong Two for determining whether or not a claim integrates a judicial exception into a practical application. Based on the individual and collective limitations of claim 10 and applying a broadest reasonable interpretation, the most applicable of such considerations appear to include: improvements to the functioning of a computer, or to any other technology or technical field (MPEP 2106.05(a)); applying the judicial exception with, or by use of, a particular machine (MPEP 2106.05(b)); and effecting a transformation or reduction of a particular article to a different state or thing (MPEP 2106.05(c)).

Regarding improvements to the functioning of a computer or other technology, none of the “additional elements” (“an input/output interface,” “a processor in communication with the input/output interface,” and “a memory storing instructions in communication with processor that, when the instructions are executed by the processor” performs the recited receiving, fusing, and generating functions), whether taken individually or in combination with the recited functions “receive by a processor, a first series of individual sensor data from a first sensor data type deployed on a robot traversing an environment, wherein the first series is a function of the distance traversed by the robot, wherein the individual sensor data type comprises a RGB sensor, wherein the first series comprises data relating to the environment,” “receive, by the processor, a second series of individual sensor data from a second sensor type deployed on the robot traversing the environment, wherein the second series is a function of the distance traversed by the robot, wherein the second sensor type comprises a light detection and ranging sensor, wherein the light detection and ranging sensor is configured to generate a three-dimensional point cloud,” and “generate, by the processor, a digital representation of the environment based on the fusing step,” appears to integrate the abstract idea in a manner that technologically improves any aspect of a device or system that may be used to implement the highlighted step, such as a signal processing device or a generic computer. For example, the structural features including the I/O interface, processor, and memory represent standard data processing functionality for implementing the fusing step that falls within the judicial exception and therefore constitute extra solution activity that fails to integrate the judicial exception into a practical application. The two “receiving” steps, individually and in combination, only convey that particular types of information are received, as the apparatus recited in claim 10 does not positively recite that the apparatus includes an RGB sensor and/or a point cloud sensor, such that both of these steps, individually and in combination, represent high level data collection that constitutes extra solution activity that fails to integrate the judicial exception into a practical application.
The step of generating a digital representation of the environment based on the fusing step represents insignificant post-solution activity in terms of merely outputting the result obtained via the step falling within the judicial exception.

Regarding application of the judicial exception with, or by use of, a particular machine, the additional elements are recited broadly as being configured and implemented in a manner reflective of mere data gathering rather than in a particularized manner of implementing multi-sensor imaging.

Regarding a transformation or reduction of a particular article to a different state or thing, claim 10 does not include any such transformation or reduction. Instead, claim 10 as a whole entails receiving input information (RGB and LiDAR data that is measured independent of the scope of the claim) and applying standard processing techniques (standard computer processor) and mathematics (processing entailed in fusing 2D and 3D image data) to combine the information in an output image, with the additional elements failing to provide a meaningful integration of the abstract idea in an application that transforms an article to a different state. Instead, the additional elements represent extra-solution activity that does not integrate the judicial exception into a practical application.

In view of the various considerations encompassed by the Step 2A, Prong Two analysis, claim 10 does not include additional elements that integrate the recited abstract idea into a practical application. Therefore, claim 10 is directed to a judicial exception and requires further analysis under Step 2B.

Regarding Step 2B, and as explained in the Step 2A, Prong Two analysis, the additional elements constitute extra-solution activity and therefore fail to result in the claim as a whole amounting to significantly more than the judicial exception, as well as failing to integrate the judicial exception into a practical application. Furthermore, the additional elements in claim 10 appear to be generic and well understood as evidenced by the disclosures of Ligocki et al., "Atlas Fusion - Modern Framework for Autonomous Agent Sensor Data Fusion," 2022 ELEKTRO (ELEKTRO), Krakow, Poland, 2022, pp. 1-6 (Ligocki), Kueny (US 2019/0285555 A1), and X. Xu, L. Zhang, J. Yang, C. Cao, Z. Tan and M. Luo, "Object Detection Based on Fusion of Sparse Point Cloud and Image Information," IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1-12, 2021 (Xu), which teach substantially similar data processing functions using the same structural features.
As explained in the grounds for rejecting claim 10 under § 103, Ligocki teaches “a processor in communication with the input/output interface” and “a memory storing instructions in communication with processor that, when the instructions are executed by the processor” performs the recited receiving, fusing, and generating functions, namely “receive by a processor, a first series of individual sensor data from a first sensor data type deployed on a robot traversing an environment, wherein the first series is a function of the distance traversed by the robot, wherein the individual sensor data type comprises a RGB sensor, wherein the first series comprises data relating to the environment,” “receive, by the processor, a second series of individual sensor data from a second sensor type deployed on the robot traversing the environment, wherein the second series is a function of the distance traversed by the robot, wherein the second sensor type comprises a light detection and ranging sensor, wherein the light detection and ranging sensor is configured to generate a three-dimensional point cloud,” and “generate, by the processor, a digital representation of the environment based on the fusing step.” Similarly, Kueny discloses an apparatus for implementing multi-sensor inspection that includes I/O interfacing, processor, and memory for processing sensed imaging data (FIG. 5 computer 500) and includes receiving multi-sensor data from a robot platform (Abstract structured laser imaging and LiDAR) to generate fused images (Abstract, FIG. 3 block 307). Xu further discloses a framework for fusing multi-sensor data (Abstract and FIG. 1) in which both LiDAR and RGB imaging data are received/processed (FIG. 1; page 2, Introduction, paragraph beginning with “In response to the above problems…” explaining the camera image data is RGB data). Therefore, the additional elements are insufficient to amount to significantly more than the judicial exception. Independent claim 10 is therefore not patent eligible under § 101.

Claim 2 includes the same “fusing” element falling within the judicial exception as claim 10 and includes no further significant additional elements that either integrate the judicial exception into a practical application or result in the claim as a whole amounting to significantly more than the judicial exception. Therefore, claim 2 is also not patent eligible under § 101.

Claims 3-9, depending from claim 2, and claims 11-17, depending from claim 10, provide additional features/steps that are part of an expanded algorithm that includes the abstract idea of claims 2 and 10 (Step 2A, Prong One). None of dependent claims 3-9 and 11-17 recites additional elements that integrate the abstract idea into a practical application (Step 2A, Prong Two), and all fail the “significantly more” test under Step 2B for substantially similar reasons as discussed with regard to the independent claims. For example, claim 3, substantially representative also of claim 11, recites a function of detecting features of the environment in real time, which is found by the Examiner to fall within the mental processes judicial exception (including an observation, evaluation, judgment, opinion). MPEP § 2106.04(a)(2). Detecting features of an environment in real time may be performed via mental processes such as evaluation of sensor data (e.g., real-time camera video) and judgment in ascertaining features revealed by the data in real time. Claim 3 further recites that the detection is performed by a machine learning model,
which represents routine, conventional program instruction implementation of the step falling within the judicial exception and therefore constitutes extra solution activity that neither integrates the judicial exception into a practical application nor results in the claim as a whole amounting to significantly more than the judicial exception. Claim 3 further recites that the machine learning model is deployed in the processor that is attached to a robot, which is a structural feature related to data collection and having no particularized functional relation to the steps falling within the judicial exception; it therefore constitutes extra solution activity that neither integrates the judicial exception into a practical application nor results in the claim as a whole amounting to significantly more than the judicial exception.

Claims 4, 7, 12, and 15 recite mapping the first and second series to a robot position relative to the environment, which falls within the mental processes exception because mapping RGB and LiDAR imaging data to data indicating robot position may be performed via mental processes (e.g., evaluation, possibly aided by pen and paper, and judgment). This element is also found to fall within the mathematical concepts exception because, as disclosed in Applicant’s specification, such mapping may be implemented via SLAM, which is fundamentally characterized by mathematical relations/calculations. The characterization that the digital representation is further based on the mapping represents insignificant post-solution activity similar to the “generating” step in claims 2 and 10.

Claims 5-6, 9, 13-14, and 17 recite additional sensor types (infrared for claims 5 and 13, and IMU for claims 6, 9, 14 and 17) for detecting information relevant to multi-sensor inspection. These elements represent high-level data collection typical of imaging inspection systems, having no particularized functional relation to the steps falling within the judicial exception, and therefore constitute extra solution activity that neither integrates the judicial exception into a practical application nor results in the claim as a whole amounting to significantly more than the judicial exception.

Claim 8, substantially representative also of claim 16, further recites tracking a distance moved by the robot within the environment, which falls within the mental processes judicial exception because it may be performed via mental processes (e.g., evaluation of information such as position, speed, etc., possibly aided by pen and paper, and judgment). Claim 8 further recites that the processor for performing the tracking is attached to the robot and that the tracking is performed using data received by a motor encoder associated with a wheel, which represents high-level data collection and therefore constitutes extra solution activity that neither integrates the judicial exception into a practical application nor results in the claim as a whole amounting to significantly more than the judicial exception. Examiner notes that the apparatus/method recited in claims 8 and 16 does not positively incorporate the motor encoder as a functional entity; instead, the motor encoder is merely a source of information processed by the recited method/apparatus.

Dependent claims 3-9 and 11-17 therefore also constitute ineligible subject matter under § 101.
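For context, the mapping of sensor series to robot position that the examiner associates with SLAM reduces, in its simplest planar form, to a rigid-body transform. A minimal sketch under assumed poses follows; it is illustrative only and is not the application's method.

```python
import math
import numpy as np

def to_world_frame(points_robot, x, y, yaw):
    """Map readings taken in the robot's frame into the world frame, given
    the robot's planar pose (x, y, yaw): rotate by yaw, then translate."""
    c, s = math.cos(yaw), math.sin(yaw)
    Rz = np.array([[c, -s],
                   [s,  c]])
    return points_robot @ Rz.T + np.array([x, y])

# A point 1 m ahead of a robot at (10, 5) facing +90 degrees lands at ~(10, 6).
print(to_world_frame(np.array([[1.0, 0.0]]), x=10.0, y=5.0, yaw=math.pi / 2))
```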
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-7 and 10-15 are rejected under 35 U.S.C. 103 as being unpatentable over Ligocki et al., "Atlas Fusion - Modern Framework for Autonomous Agent Sensor Data Fusion," 2022 ELEKTRO (ELEKTRO), Krakow, Poland, 2022, pp. 1-6 (Ligocki), in view of Kueny (US 2019/0285555 A1).

As to claim 2, Ligocki teaches “[a] method for generating a digital representation of an environment (Abstract describing framework including autonomous robot and imaging sensors for implementing method that includes fusing multi-sensor data including RGB and 3D LiDAR data (digital) for environmental visualization that, per page 2, C. Algorithms, D. Local Maps, and E. Visualizers, is generated via computer processing (is a digital representation)), comprising: receiving by a processor (pages 1-2, B. Core Pipeline, paragraph beginning with “At the startup …” through paragraph beginning with “The entire pipeline …” describing processing/program “pipeline” including loading data into memory (entails processor-based information processing); page 2, C. Algorithms, paragraph beginning with “The ‘Algorithms’ module …” further characterizing the method as implemented via data processing code (implemented via processor)), a first series of individual sensor data from a first sensor type (page 1, A. Input Data, paragraph beginning with “The data stored as …” describing RGB camera data as input data; pages 3-4, C. Camera-LiDAR Object Detection, paragraph beginning with “LiDAR can measure…” through paragraph beginning with “For this purpose, we have …” describing processing of RGB image data) deployed” [with respect to] “a robot traversing an environment (Abstract describing framework including multi-sensor configuration including RGB camera deployed with respect to an autonomous robot; page 2, C. Outputs, paragraph beginning with “Secondary, there are several …” describing that the agent (robot) travels during a “mapping session” that includes collection of RGB camera data (the RGB data is collected in association with agent travel such that the camera would be deployed on the agent robot)), wherein the first series is a function of the distance traversed by the robot (the RGB camera data would reflect the environment over the path traversed during a “mapping session” including detected features per pages 3-4, C. Camera-LiDAR Object Detection, paragraph beginning with “LiDAR can measure …”), wherein the individual sensor data type comprises a RGB sensor (Abstract; page 1, A. Input Data, paragraph beginning with “The data stored as …” describing RGB camera data as input data; pages 3-4, C. Camera-LiDAR Object Detection, paragraph beginning with “LiDAR can measure…” through paragraph beginning with “For this purpose, we have …” describing processing of RGB camera image data), wherein the first series comprises data relating to the environment
(FIG. 1 depicting example environment captured by combination of LiDAR and RGB imaging; page 2, C. Outputs, paragraph beginning with “Secondary, there are several …” describing the RGB data as collected over the places (environment) that the agent travels); receiving, by the processor, a second series of individual sensor data from a second sensor type (Abstract and page 1, A. Input Data, paragraph beginning with “The data stored as …” describing LiDAR scan data as input data for data fusion; page 3, B. LiDAR data aggregation, paragraphs beginning with “As we are using …” and “The input LiDAR data …” describing obtaining LiDAR data) deployed” [with respect to] “the robot traversing the environment (Abstract describing framework including multi-sensor configuration including 3D LiDAR deployed with respect to an autonomous robot; page 3, B. LiDAR data aggregation, paragraph beginning with “As we are using …” and “The input LiDAR data could come …” describing the scanning function performed during and affected by robot re-positioning (clearly inferring that the scanners are mounted to the robot)), wherein the second series is a function of distance traversed by the robot (the LiDAR data would reflect the environment over the path traversed during agent repositioning), wherein the second sensor type comprises a light detection and ranging sensor (Abstract and page 1, A. Input Data, paragraph beginning with “The data stored as …” describing LiDAR scan data as input data for data fusion; page 3, B. LiDAR data aggregation, paragraphs beginning with “As we are using …” and “The input LiDAR data …” describing obtaining LiDAR data), wherein the light detection and ranging sensor is configured to generate a three-dimensional point cloud (Abstract disclosing the multi-sensor configuration includes 3D LiDARs (inherently generates 3D point cloud); page 2, F. Data Writers, paragraph beginning with “The Data Writer section …” and page 3, B. LiDAR data aggregation, paragraph beginning with “All these three information …” and FIG. 3 describing and depicting the 3D LiDAR data as point cloud data); fusing together, by the processor, the first series and the three-dimensional point cloud (Abstract describing fusion of RGB cameras and 3D LiDAR; page 1, I. Introduction, paragraph beginning with “As a result, our team …” describing fusion of various sensor types (as depicted in FIG. 1); FIG. 1 depicting visualization that combines (fuses) RGB image data and LiDAR data; page 4, C. Camera-LiDAR Object Detection, paragraphs beginning with “For this purpose, we have created …” explaining that the system fuses the LiDAR data and camera detections into a single representation, and paragraph beginning with “There is an estimated …” describing the RGB as the camera source and describing the detected object aspect of the RGB data (frustum color coded as shown in FIG. 1)) to create a three-dimensional image of the environment (FIG. 1 depicting a height/width/depth perspective (three-dimensional) environment image fusing LiDAR and RGB image data; FIG. 6 depicting depth images combining point cloud data and thermal image data (FIG. 6 caption explains same combined imaging may be performed using RGB image data)); and generating, by the processor, a digital representation of the environment based on the fusing step (FIGS. 1 and 6 are generated by computer processing (per page 2, C. Algorithms and E. Visualizers) based on digital data (RGB camera and LiDAR) and therefore constitute digital representations).
As set forth above, Ligocki strongly suggests but does not expressly teach that the RGB camera and 3D LiDAR sensors are mounted “on” the robot. Kueny discloses a method for performing multi-sensor inspection using a multi-sensor robot (Abstract) that includes using a robot on which multiple imaging sensors are deployed ([0004] and [0023] multi-sensor inspection robot includes a variety of multiple sensor types including 3D LIDAR and optical imaging sensors such as cameras; [0027] and [0029] sensor data may include LIDAR (point cloud) and laser scan (image) obtained by LiDAR unit and laser scanner). It would have been obvious to one of ordinary skill in the art before the effective filing date to have applied Kueny’s teaching of deploying multiple imaging sensors, including LIDAR and optical imaging (e.g., cameras), onto a mobile robot for implementing multi-sensor inspection to the method taught by Ligocki, such that in combination the method includes deploying the “first sensor type” and “second sensor type” on the robot traversing the environment. The motivation would have been to effectuate multi-sensor surveying/inspection of an environment as the robot travels through the environment as disclosed by Kueny.

As to claim 3, the combination of Ligocki and Kueny teaches “[t]he method of claim 2, further comprising a machine learning model deployed in the processor (Ligocki: pages 3-4, C. Camera-LiDAR Object Detection, paragraph beginning with “LiDAR can measure …” describing use of neural network (inherently executed via a processor)),” “wherein the machine learning model is configured to detect features of the environment in real time (Ligocki: pages 3-4, C. Camera-LiDAR Object Detection, paragraph beginning with “LiDAR can measure …” describing use of neural network for object detection in real time).” Ligocki does not appear to teach that a processor for implementing the machine learning model is incorporated with the robot structure. Kueny discloses a method for performing multi-sensor inspection using a multi-sensor robot (Abstract) in which machine learning is used for detecting features of the environment ([0030]) and in which the processing for implementing the inspection tasks is performed by a processor attached to the robot (FIG. 5 computer 500 including CPU 510 that per [0042] is configured for implementing processing functionality with respect to inspection robot 570; [0037] computer may be deployed within inspection robot 570). It would have been obvious to one of ordinary skill in the art before the effective filing date to have applied Kueny’s teaching of using an “on-board” processor attached to the robot for performing processing functions associated with the multi-sensor data collection, including machine learning feature detection, to the method taught by Ligocki as modified by Kueny to include a multi-sensor robot, such that in combination the processor that implements the machine learning detection is configured to be attached to the robot. Such a combination would amount to selecting a known design option for deploying processing functionality for a multi-sensor robot to achieve predictable results.

As to claim 4, the combination of Ligocki and Kueny teaches “[t]he method of claim 3, further comprising mapping each of the first series and second series to a position of the robot relative to the environment
(Ligocki: page 3, A. Precise Positioning, paragraph beginning with “Without precise positioning …” through the paragraph beginning with “As the system models …” explaining that the agent position is tracked for building the image map model that is generated, as depicted and explained with reference to FIGS. 1 and 6, based on both the 3D LiDAR data and the RGB camera data, such that the 3D LiDAR data and RGB camera data track with the agent position (e.g., RGB camera data aligned as in images in FIGS. 1 and 6 with LiDAR data that per page 3, B. LiDAR data aggregation, paragraph beginning with “The input LiDAR data …” through paragraph beginning with “We have already estimated …” is mapped to the agent’s location), wherein the digital representation of the environment is further based on the mapping step (the aligned optical imaging/LiDAR imaging results such as depicted in Ligocki FIGS. 1 and 6 are dependent on the position tracking/mapping as explained on page 3, A. Precise Positioning, paragraph beginning with “Without precise positioning …”; page 3, B. LiDAR data aggregation, paragraph beginning with “The input LiDAR data …” through paragraph beginning with “We have already estimated …” explaining the use of agent positioning information on the LiDAR scanning results (from which the final digital representation is generated)).”

As to claim 5, the combination of Ligocki and Kueny teaches “[t]he method of claim 4, further comprising an infrared sensor associated with the robot (Ligocki: Abstract explaining that multi-sensor data associated with robot may include data from thermal (infrared) camera; page 1, A. Input Data, paragraph beginning with “The data are stored …” describing input data as including data from thermal camera; page 4, D. RGB YOLO Detections to IR Image, paragraph beginning with “If we focus …” through paragraph beginning with “We have proposed …” designating the thermal images as “IR images”), the infrared sensor configured to detect temperature gradients in the environment (Ligocki: FIG. 6 (bottom) depicting thermal/IR imaging that shows thermal profiles of objects; Examiner notes that thermal/IR sensors inherently implement imaging via thermal profiles that are manifested via temperature gradients).”

As to claim 6, as interpreted in view of the grounds for objecting to claim 6, the combination of Ligocki and Kueny teaches “[t]he method of claim 4, further comprising an inertia management unit associated with the robot (Ligocki: Abstract explaining that multi-sensor data associated with robot may include data from a 3D IMU; page 1, A. Input Data, paragraph beginning with “The data are stored …” describing input data as including data from IMU) configured to determine a pose and position of the robot (Ligocki: FIG. 2 depicting IMU providing orientation information (roll, pitch) and linear acceleration data (position/location information) for determining position and orientation outputs).”
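For context, recovering roll and pitch from an IMU's accelerometer, of the kind Ligocki's FIG. 2 is cited for, commonly uses the gravity vector as a reference. A minimal sketch with the standard formulas follows; it is illustrative only, and yaw is not observable from gravity alone.

```python
import math

def roll_pitch_from_accel(ax, ay, az):
    """Estimate roll and pitch (radians) from a static accelerometer reading,
    treating the measured gravity vector as the vertical reference."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

# Example: robot tilted slightly nose-down about its pitch axis.
roll, pitch = roll_pitch_from_accel(ax=0.17, ay=0.0, az=9.8)
print(f"roll={math.degrees(roll):.1f} deg, pitch={math.degrees(pitch):.1f} deg")
```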
As to claim 7, the combination of Ligocki and Kueny teaches “[t]he method of claim 2, further comprising mapping the first series and the second series to a position of the robot (Ligocki: page 3, A. Precise Positioning, paragraph beginning with “Without precise positioning …” through the paragraph beginning with “As the system models …” explaining that the agent position is tracked for building the image map model that is generated, as depicted and explained with reference to FIGS. 1 and 6, based on both the 3D LiDAR data and the RGB camera data, such that the 3D LiDAR data and RGB camera data track with the agent position (e.g., RGB camera data aligned as in images in FIGS. 1 and 6 with LiDAR data that per page 3, B. LiDAR data aggregation, paragraph beginning with “The input LiDAR data …” through paragraph beginning with “We have already estimated …” is mapped to the agent’s location), wherein the digital representation of the environment is further based on the mapping step (the aligned optical imaging/LiDAR imaging results such as depicted in Ligocki FIGS. 1 and 6 are dependent on the position tracking/mapping as explained on page 3, A. Precise Positioning, paragraph beginning with “Without precise positioning …”; page 3, B. LiDAR data aggregation, paragraph beginning with “The input LiDAR data …” through paragraph beginning with “We have already estimated …” explaining the use of agent positioning information on the LiDAR scanning results (from which the final digital representation is generated)).”

As to claim 10, Ligocki teaches “[a]n apparatus for generating a digital representation of an environment (Abstract describing framework including autonomous robot and imaging sensors for implementing method that includes fusing multi-sensor data including RGB and 3D LiDAR data (digital) for environmental visualization that, per page 2, C. Algorithms, D. Local Maps, and E. Visualizers, is generated via computer processing (is a digital representation); pages 1-2, II. General Architecture Description, describing data processing structure), comprising: an input/output interface (pages 1-2, A. Input Data and B. Outputs, describing input and output interfaces for processing pipeline); a processor in communication with the input/output interface (pages 1-2, B. Core Pipeline, paragraph beginning with “At the startup …” through paragraph beginning with “The entire pipeline …” describing processing/program “pipeline” including loading data into memory (entails processor-based information processing); page 2, C. Algorithms, paragraph beginning with “The ‘Algorithms’ module …” further characterizing the method as implemented via data processing code (implemented via processor)); and a memory storing instructions in communication with processor (pages 1-2, B. Core Pipeline, paragraph beginning with “At the startup …” through paragraph beginning with “The entire pipeline …” describing processing/program “pipeline” including loading data into memory for processing; page 2, C. Algorithms, paragraph beginning with “The ‘Algorithms’ module …” further characterizing the method as implemented via data processing code (entails execution of instructions)) that, when the instructions are executed by the processor, cause the apparatus to: receive by a processor, a first series of individual sensor data from a first sensor data type (page 1, A. Input Data, paragraph beginning with “The data stored as …” describing RGB camera data as input data; pages 3-4, C. Camera-LiDAR Object Detection, paragraph beginning with “LiDAR can measure…” through paragraph beginning with “For this purpose, we have …” describing processing of RGB image data) deployed” [with respect to] “a robot traversing an environment (Abstract describing framework including multi-sensor configuration including RGB camera deployed with respect to an autonomous robot;
page 2, C. Outputs, paragraph beginning with “Secondary, there are several …” describing that the agent (robot) travels during a “mapping session” that includes collection of RGB camera data (the RGB data is collected in association with agent travel such that the camera would be deployed on the agent robot)), wherein the first series is a function of the distance traversed by the robot (the RGB camera data would reflect the environment over the path traversed during a “mapping session” including detected features per pages 3-4, C. Camera-LiDAR Object Detection, paragraph beginning with “LiDAR can measure …”), wherein the individual sensor data type comprises a RGB sensor (Abstract; page 1, A. Input Data, paragraph beginning with “The data stored as …” describing RGB camera data as input data; pages 3-4, C. Camera-LiDAR Object Detection, paragraph beginning with “LiDAR can measure…” through paragraph beginning with “For this purpose, we have …” describing processing of RGB camera image data), wherein the first series comprises data relating to the environment (FIG. 1 depicting example environment captured by combination of LiDAR and RGB imaging; page 2, C. Outputs, paragraph beginning with “Secondary, there are several …” describing the RGB data as collected over the places (environment) that the agent travels); receive, by the processor, a second series of individual sensor data from a second sensor type (Abstract and page 1, A. Input Data, paragraph beginning with “The data stored as …” describing LiDAR scan data as input data for data fusion; page 3, B. LiDAR data aggregation, paragraphs beginning with “As we are using …” and “The input LiDAR data …” describing obtaining LiDAR data) deployed” [with respect to] “the robot traversing the environment (Abstract describing framework including multi-sensor configuration including 3D LiDAR deployed with respect to an autonomous robot; page 3, B. LiDAR data aggregation, paragraph beginning with “As we are using …” and “The input LiDAR data could come …” describing the scanning function performed during and affected by robot re-positioning (clearly inferring that the scanners are mounted to the robot)), wherein the second series is a function of the distance traversed by the robot (the LiDAR data would reflect the environment over the path traversed during agent repositioning), wherein the second sensor type comprises a light detection and ranging sensor (Abstract and page 1, A. Input Data, paragraph beginning with “The data stored as …” describing LiDAR scan data as input data for data fusion; page 3, B. LiDAR data aggregation, paragraphs beginning with “As we are using …” and “The input LiDAR data …” describing obtaining LiDAR data), wherein the light detection and ranging sensor is configured to generate a three-dimensional point cloud (Abstract disclosing the multi-sensor configuration includes 3D LiDARs (inherently generates 3D point cloud); page 2, F. Data Writers, paragraph beginning with “The Data Writer section …” and page 3, B. LiDAR data aggregation, paragraph beginning with “All these three information …” and FIG. 3 describing and depicting the 3D LiDAR data as point cloud data); fuse together, by the processor, the first series and the three-dimensional point cloud (Abstract describing fusion of RGB cameras and 3D LiDAR; page 1, I. Introduction, paragraph beginning with “As a result, our team …” describing fusion of various sensor types (as depicted in FIG. 1);
FIG. 1 depicting visualization that combines (fuses) RGB image data and LiDAR data; page 4, C. Camera-LiDAR Object Detection, paragraphs beginning with “For this purpose, we have created …” explaining that the system fuses the LiDAR data and camera detections into a single representation, and paragraph beginning with “There is an estimated …” describing the RGB as the camera source and describing the detected object aspect of the RGB data (frustum color coded as shown in FIG. 1)) to create a three-dimensional image of the environment (FIG. 1 depicting a height/width/depth perspective (three-dimensional) environment image fusing LiDAR and RGB image data; FIG. 6 depicting depth images combining point cloud data and thermal image data (FIG. 6 caption explains same combined imaging may be performed using RGB image data)); and generate, by the processor, a digital representation of the environment based on the fusing step (FIGS. 1 and 6 are generated by computer processing (per page 2, C. Algorithms and E. Visualizers) based on digital data (RGB camera and LiDAR) and therefore constitute digital representations).

As set forth above, Ligocki strongly suggests but does not expressly teach that the RGB camera and 3D LiDAR sensors are mounted “on” the robot. Kueny discloses a method/system for performing multi-sensor inspection using a multi-sensor robot (Abstract) that includes using a robot on which multiple imaging sensors are deployed ([0004] and [0023] multi-sensor inspection robot includes a variety of multiple sensor types including 3D LIDAR and optical imaging sensors such as cameras; [0027] and [0029] sensor data may include LIDAR (point cloud) and laser scan (image) obtained by LiDAR unit and laser scanner). It would have been obvious to one of ordinary skill in the art before the effective filing date to have applied Kueny’s teaching of deploying multiple imaging sensors, including LIDAR and optical imaging (e.g., cameras), onto a mobile robot for implementing multi-sensor inspection to the apparatus taught by Ligocki, such that in combination the apparatus is configured for deploying the “first sensor type” and “second sensor type” on the robot traversing the environment. The motivation would have been to effectuate multi-sensor surveying/inspection of an environment as the robot travels through the environment as disclosed by Kueny.

As to claim 11, the combination of Ligocki and Kueny teaches “[t]he apparatus of claim 10, further comprising a machine learning model deployed in the processor (Ligocki: pages 3-4, C. Camera-LiDAR Object Detection, paragraph beginning with “LiDAR can measure …” describing use of neural network (inherently executed via a processor)),” “wherein the machine learning model is configured to detect features of the environment in real time (Ligocki: pages 3-4, C. Camera-LiDAR Object Detection, paragraph beginning with “LiDAR can measure …” describing use of neural network for object detection in real time).” Ligocki does not appear to teach that a processor for implementing the machine learning model is incorporated with the robot structure. Kueny discloses a method/system for performing multi-sensor inspection using a multi-sensor robot (Abstract) in which machine learning is used for detecting features of the environment ([0030]) and in which the processing for implementing the inspection tasks is performed by a processor attached to the robot
(FIG. 5 computer 500 including CPU 510 that per [0042] is configured for implementing processing functionality with respect to inspection robot 570; [0037] computer may be deployed within inspection robot 570). It would have been obvious to one of ordinary skill in the art before the effective filing date to have applied Kueny’s teaching of using an “on-board” processor attached to the robot for performing processing functions associated with the multi-sensor data collection, including machine learning feature detection, to the apparatus taught by Ligocki as modified by Kueny to include a multi-sensor robot, such that in combination the processor that implements the machine learning detection is configured to be attached to the robot. Such a combination would amount to selecting a known design option for deploying processing functionality for a multi-sensor robot to achieve predictable results.

As to claim 12, the combination of Ligocki and Kueny teaches “[t]he apparatus of claim 11, further comprising mapping each of the first series and second series to a position of the robot relative to the environment (Ligocki: page 3, A. Precise Positioning, paragraph beginning with “Without precise positioning …” through the paragraph beginning with “As the system models …” explaining that the agent position is tracked for building the image map model that is generated, as depicted and explained with reference to FIGS. 1 and 6, based on both the 3D LiDAR data and the RGB camera data, such that the 3D LiDAR data and RGB camera data track with the agent position (e.g., RGB camera data aligned as in images in FIGS. 1 and 6 with LiDAR data that per page 3, B. LiDAR data aggregation, paragraph beginning with “The input LiDAR data …” through paragraph beginning with “We have already estimated …” is mapped to the agent’s location), wherein the digital representation of the environment is further based on the mapping step (the aligned optical imaging/LiDAR imaging results such as depicted in Ligocki FIGS. 1 and 6 are dependent on the position tracking/mapping as explained on page 3, A. Precise Positioning, paragraph beginning with “Without precise positioning …”; page 3, B. LiDAR data aggregation, paragraph beginning with “The input LiDAR data …” through paragraph beginning with “We have already estimated …” explaining the use of agent positioning information on the LiDAR scanning results (from which the final digital representation is generated)).”

As to claim 13, the combination of Ligocki and Kueny teaches “[t]he apparatus of claim 12, further comprising an infrared sensor associated with the robot (Ligocki: Abstract explaining that multi-sensor data associated with robot may include data from thermal (infrared) camera; page 1, A. Input Data, paragraph beginning with “The data are stored …” describing input data as including data from thermal camera; page 4, D. RGB YOLO Detections to IR Image, paragraph beginning with “If we focus …” through paragraph beginning with “We have proposed …” designating the thermal images as “IR images”), the infrared sensor configured to detect temperature gradients in the environment (Ligocki: FIG. 6 (bottom) depicting thermal/IR imaging that shows thermal profiles of objects;
Examiner notes that thermal/IR sensors inherently implement imaging via thermal profiles that are manifested via temperature gradients).”

As to claim 14, as interpreted in view of the grounds for objecting to claim 14, the combination of Ligocki and Kueny teaches “[t]he apparatus of claim 12, further comprising an inertia management unit associated with the robot (Ligocki: Abstract explaining that multi-sensor data associated with robot may include data from a 3D IMU; page 1, A. Input Data, paragraph beginning with “The data are stored …” describing input data as including data from IMU) configured to determine a pose and position of the robot (Ligocki: FIG. 2 depicting IMU providing orientation information (roll, pitch) and linear acceleration data (position/location information) for determining position and orientation outputs).”

As to claim 15, the combination of Ligocki and Kueny teaches “[t]he apparatus of claim 10, further comprising mapping the first series and the second series to a position of the robot (Ligocki: page 3, A. Precise Positioning, paragraph beginning with “Without precise positioning …” through the paragraph beginning with “As the system models …” explaining that the agent position is tracked for building the image map model that is generated, as depicted and explained with reference to FIGS. 1 and 6, based on both the 3D LiDAR data and the RGB camera data, such that the 3D LiDAR data and RGB camera data track with the agent position (e.g., RGB camera data aligned as in images in FIGS. 1 and 6 with LiDAR data that per page 3, B. LiDAR data aggregation, paragraph beginning with “The input LiDAR data …” through paragraph beginning with “We have already estimated …” is mapped to the agent’s location), wherein the digital representation of the environment is further based on the mapping step (the aligned optical imaging/LiDAR imaging results such as depicted in Ligocki FIGS. 1 and 6 are dependent on the position tracking/mapping as explained on page 3, A. Precise Positioning, paragraph beginning with “Without precise positioning …”; page 3, B. LiDAR data aggregation, paragraph beginning with “The input LiDAR data …” through paragraph beginning with “We have already estimated …” explaining the use of agent positioning information on the LiDAR scanning results (from which the final digital representation is generated)).”

Claims 8-9 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Ligocki in view of Kueny as applied to claims 7 and 15 above, and further in view of Alzuhiri, Mohand, et al., "IMU-assisted robotic structured light sensing with featureless registration under uncertainties for pipeline inspection," NDT & E International 139 (2023) (Alzuhiri).

As to claim 8, the combination of Ligocki and Kueny teaches “[t]he method of claim 7, further comprising tracking, by the processor, a distance moved by the robot within the environment (Ligocki: page 3, A. Precise Positioning, paragraph beginning with “In the beginning, the first GNSS …” describing tracking of agent movement (as depicted in FIG. 2 as tracking position) beginning with the mapping session origin; Examiner notes that such position tracking over a mapping session would entail tracking distance).” Ligocki does not appear to teach that a processor for tracking distance traveled is incorporated with the robot structure.
Kueny discloses a method for performing multi-sensor inspection using a multi-sensor robot (Abstract) in which the processing for implementing the inspection tasks is performed by a processor attached to the robot (FIG. 5 computer 500 including CPU 510 that per [0042] is configured for implementing processing functionality with respect to inspection robot 570; [0037] computer may be deployed within inspection robot 570). It would have been obvious to one of ordinary skill in the art before the effective filing date to have applied Kueny’s teaching of using an “on-board” processor attached to the robot for performing processing functions associated with the multi-sensor data collection to the method taught by Ligocki as modified by Kueny, which teaches tracking positions and therefore respective robot distances as part of inspection processing, such that in combination the processor that implements the distance tracking is configured to be attached to the robot. Such a combination would amount to selecting a known design option for deploying processing functionality for a multi-sensor robot to achieve predictable results.

Neither Ligocki nor Kueny appears to teach “wherein the tracking is performed using data received by a motor encoder associated with a wheel attached to the robot.” Alzuhiri discloses a method for robotic imaging for pipeline inspection (Abstract) that includes using wheel odometry, which translates to motor encoding, for tracking robot motion and positioning (Abstract and FIG. 3 describing and depicting wheel odometry used for positioning aspect of registration and reconstruction of sensor data; page 2, Introduction, paragraph beginning with “The main information sources for global positioning …” describing encoder data used for estimating distance of robot within pipeline; page 6, 3.3 Wheel Odometry, full paragraph beginning with “Another set of sensors …” through paragraph following equation (26) explaining that wheel odometry entails an encoder for tracking rotation that is motor-driven (motor rotation translating to wheel rotation and hence motor encoder associated with a wheel)). It would have been obvious to one of ordinary skill in the art before the effective filing date to have applied Alzuhiri’s teaching of using a motor encoder for tracking positions/distances traveled by an inspection robot to the method taught by Ligocki as modified by Kueny, such that in combination the method includes using a motor encoder associated with a robot wheel in addition to or as an alternative to the satellite and IMU tracking implemented by Ligocki for tracking robot position and distance. The motivation would have been to provide position/distance information in an enclosed space, such as a pipe interior, in which external positioning (e.g., satellite) may be unavailable or unreliable, as suggested by Alzuhiri. Furthermore, such a combination would amount to selecting a known design option for tracking inspection robot position/distance to achieve predictable results.
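For context, the wheel-odometry distance tracking attributed to Alzuhiri reduces to arithmetic on encoder counts. A minimal sketch follows; the tick resolution, gear ratio, and wheel radius are assumed values for illustration, not parameters from any cited reference.

```python
import math

def distance_from_encoder(ticks, ticks_per_motor_rev=2048,
                          gear_ratio=30.0, wheel_radius_m=0.05):
    """Convert motor-encoder ticks to distance traveled by the wheel.
    Motor revolutions are reduced by the gearbox before reaching the wheel."""
    motor_revs = ticks / ticks_per_motor_rev
    wheel_revs = motor_revs / gear_ratio
    return wheel_revs * 2.0 * math.pi * wheel_radius_m

# Example: 1,228,800 ticks -> 600 motor revs -> 20 wheel revs -> ~6.28 m.
print(f"{distance_from_encoder(1_228_800):.2f} m")
```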
As to claim 9, as interpreted in view of the grounds for objecting to claim 9, the combination of Ligocki, Kueny, and Alzuhiri teaches “[t]he method of claim 8, further comprising an inertial management unit associated with the robot (Ligocki: Abstract explaining that multi-sensor data associated with robot may include data from a 3D IMU; page 1, A. Input Data, paragraph beginning with “The data are stored …” describing input data as including data from IMU) configured to determine a pose and position of the robot (Ligocki: FIG. 2 depicting IMU providing orientation information (roll, pitch) and linear acceleration data (position/location information) for determining position and orientation outputs).”

As to claim 16, the combination of Ligocki and Kueny teaches “[t]he apparatus of claim 15, further comprising tracking, by the processor, a distance moved by the robot within the environment (Ligocki: page 3, A. Precise Positioning, paragraph beginning with “In the beginning, the first GNSS …” describing tracking of agent movement (as depicted in FIG. 2 as tracking position) beginning with the mapping session origin; Examiner notes that such position tracking over a mapping session would entail tracking distance).” Ligocki does not appear to teach that a processor for tracking distance traveled is incorporated with the robot structure. Kueny discloses a method/system for performing multi-sensor inspection using a multi-sensor robot (Abstract) in which the processing for implementing the inspection tasks is performed by a processor attached to the robot (FIG. 5 computer 500 including CPU 510 that per [0042] is configured for implementing processing functionality with respect to inspection robot 570; [0037] computer may be deployed within inspection robot 570). It would have been obvious to one of ordinary skill in the art before the effective filing date to have applied Kueny’s teaching of using an “on-board” processor attached to the robot for performing processing functions associated with the multi-sensor data collection to the apparatus taught by Ligocki as modified by Kueny, which teaches tracking positions and therefore respective robot distances as part of inspection processing, such that in combination the processor that implements the distance tracking is configured to be attached to the robot. Such a combination would amount to selecting a known design option for deploying processing functionality for a multi-sensor robot to achieve predictable results.

Neither Ligocki nor Kueny appears to teach “wherein the tracking is performed using data received by a motor encoder associated with a wheel attached to the robot.” Alzuhiri discloses a method for robotic imaging for pipeline inspection (Abstract) that includes using wheel odometry, which translates to motor encoding, for tracking robot motion and positioning (Abstract and FIG. 3 describing and depicting wheel odometry used for positioning aspect of registration and reconstruction of sensor data; page 2, Introduction, paragraph beginning with “The main information sources for global positioning …” describing encoder data used for estimating distance of robot within pipeline; page 6, 3.3 Wheel Odometry, full paragraph beginning with “Another set of sensors …” through paragraph following equation (26) explaining that wheel odometry entails an encoder for tracking rotation that is motor-driven (motor rotation translating to wheel rotation and hence motor encoder associated with a wheel)). It would have been obvious to one of ordinary skill in the art before the effective filing date to have applied Alzuhiri’s teaching of using a motor encoder for tracking positions/distances traveled by an inspection robot to the apparatus taught by Ligocki as modified by Kueny, such that in combination the apparatus is configured for using a motor encoder associated with a robot wheel in addition to or as an alternative to the satellite and IMU tracking implemented by Ligocki for tracking robot position and distance.
The motivation would have been to provide position/distance information in an enclosed space, such as a pipe interior, in which external positioning (e.g., satellite) may be unavailable or unreliable, as suggested by Alzuhiri. Furthermore, such a combination would amount to selecting a known design option for tracking inspection robot position/distance to achieve predictable results.

As to claim 17, the combination of Ligocki, Kueny, and Alzuhiri teaches “[t]he apparatus of claim 16, further comprising an inertial management unit associated with the robot (Ligocki: Abstract explaining that multi-sensor data associated with robot may include data from a 3D IMU; page 1, A. Input Data, paragraph beginning with “The data are stored …” describing input data as including data from IMU) configured to determine a pose and position of the robot (Ligocki: FIG. 2 depicting IMU providing orientation information (roll, pitch) and linear acceleration data (position/location information) for determining position and orientation outputs).”

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW W BACA whose telephone number is (571) 272-2507. The examiner can normally be reached Monday - Friday, 8:00 am - 5:30 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Schechter, can be reached at (571) 272-2302. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MATTHEW W. BACA/
Examiner, Art Unit 2857

/ANDREW SCHECHTER/
Supervisory Patent Examiner, Art Unit 2857

Prosecution Timeline

Oct 23, 2023
Application Filed
Nov 25, 2024
Response after Non-Final Action
Feb 13, 2026
Non-Final Rejection — §101, §103
Mar 20, 2026
Interview Requested
Mar 27, 2026
Applicant Interview (Telephonic)
Mar 27, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601701
MULTI-FREQUENCY SENSING SYSTEM AND METHOD
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12585038
METHOD FOR OPERATING A METAL DETECTOR AND METAL DETECTOR
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12551192
ULTRASONIC DIAGNOSTIC APPARATUS AND MEDICAL IMAGE PROCESSING APPARATUS
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12504371
SYSTEM AND METHOD OF DYNAMIC MICRO-OPTICAL COHERENCE TOMOGRAPHY FOR MAPPING CELLULAR FUNCTIONS
Granted Dec 23, 2025 (2y 5m to grant)
Patent 12493093
REDUCTION OF OFF-RESONANCE EFFECTS IN MAGNETIC RESONANCE IMAGING
Granted Dec 09, 2025 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 75% (+1.9%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 113 resolved cases by this examiner. Grant probability derived from career allow rate.
