Prosecution Insights
Last updated: April 19, 2026
Application No. 18/295,650

LIDAR POINT CLOUD DATA ALIGNMENT WITH CAMERA PIXELS

Status: Non-Final OA (§103)
Filed: Apr 04, 2023
Examiner: ROBERTS, RACHEL L
Art Unit: 2674
Tech Center: 2600 — Communications
Assignee: Avanti R&D, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 90%, above average (17 granted / 19 resolved; +27.5% vs TC avg)
Interview Lift: +14.3% across resolved cases with an interview (moderate lift)
Typical Timeline: 2y 10m average prosecution
Career History: 54 total applications across all art units (35 currently pending)

Statute-Specific Performance

§101: 12.1% (-27.9% vs TC avg)
§103: 65.1% (+25.1% vs TC avg)
§102: 7.9% (-32.1% vs TC avg)
§112: 12.1% (-27.9% vs TC avg)
Tech Center average estimates shown for comparison • Based on career data from 19 resolved cases
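
One arithmetic note on these figures: adding each "vs TC avg" delta back to the examiner's share yields the same implied baseline (about 40%) in every row, which appears to be the flat Tech Center estimate the comparison is drawn against. A quick check, using only the numbers listed above:

```python
# Sanity check on the statute mix above: examiner share minus the listed
# delta should recover the implied Tech Center baseline for each statute.
shares = {"101": 12.1, "103": 65.1, "102": 7.9, "112": 12.1}      # examiner %
deltas = {"101": -27.9, "103": 25.1, "102": -32.1, "112": -27.9}  # vs TC avg
for statute, share in shares.items():
    print(statute, round(share - deltas[statute], 1))  # ~40.0 in every case
```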

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant claims the benefit of US Provisional Application No. 63/330,609, filed 04/13/2022. Claims 1-9 have been afforded the benefit of this filing date.

Information Disclosure Statement

The IDS dated 04/04/2023 has been considered and placed in the application file.

Claim Interpretation

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification. Under MPEP 2143.03, "All words in a claim must be considered in judging the patentability of that claim against the prior art." In re Wilson, 424 F.2d 1382, 1385, 165 USPQ 494, 496 (CCPA 1970). As a general matter, the grammar and ordinary meaning of terms as understood by one having ordinary skill in the art used in a claim will dictate whether, and to what extent, the language limits the claim scope. Language that suggests or makes a feature or step optional but does not require that feature or step does not limit the scope of a claim under the broadest reasonable claim interpretation. In addition, when a claim requires selection of an element from a list of alternatives, the prior art teaches the element if one of the alternatives is taught by the prior art. See, e.g., Fresenius USA, Inc. v. Baxter Int'l, Inc., 582 F.3d 1288, 1298, 92 USPQ2d 1163, 1171 (Fed. Cir. 2009).

Claim 1 recites "or" in listing "identifying a moving object in the first buffer memory or a moving blob in the second buffer memory". Since "or" is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required. Because, on balance, the disjunctive interpretation appears to enjoy the most specification support, the disjunctive interpretation (one of A, B, or C) is being adopted for the purposes of this Office Action. Applicant's comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.

Claim 5 recites "or" in listing "identifying a moving object in the first buffer memory or a moving blob in the second buffer memory". Since "or" is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required. Because, on balance, the disjunctive interpretation appears to enjoy the most specification support, the disjunctive interpretation (one of A, B, or C) is being adopted for the purposes of this Office Action. Applicant's comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-9 are rejected under 35 U.S.C. 103 as unpatentable over Yang et al. (US Patent Publication US 2020/0021728 A1, hereafter referred to as Yang) in view of Hicks (US Patent No. US 10,345,447 B1, hereafter referred to as Hicks).

Regarding Claim 1, Yang teaches a computer vision apparatus (Yang ¶0040 discloses that the method operates on a standalone machine) for detecting (Yang ¶0034 discloses detecting the surroundings of the vehicle), the computer vision apparatus comprising: an optical camera sensor (Yang ¶0005 and ¶0046 discloses camera sensors) for obtaining image data of the object (Yang ¶0046 and ¶0065, ¶0067, ¶0073 discloses image data in the form of images or videos captured from the camera sensors); a lidar sensor (Yang ¶0005 discloses lidar sensors) for obtaining point cloud data of the object (Yang ¶0068, ¶0070 discloses the lidar sensor outputting the point clouds capturing the surrounding environment); for storing (Yang ¶0048 discloses storing the sensor data) the image data (Yang ¶0046 and ¶0065, ¶0067, ¶0073 discloses image data in the form of images or videos captured from the camera sensors) from the optical camera sensor (Yang ¶0005 and ¶0046 discloses camera sensors); for storing (Yang ¶0048 discloses storing the sensor data) the point cloud data (Yang ¶0068, ¶0070 discloses the lidar sensor outputting the point clouds capturing the surrounding environment) from the lidar sensor (Yang ¶0005 discloses lidar sensors); a CPU (Central Processing Unit) (Yang ¶0042 discloses a CPU included in a computer system) on which computer programs run thereon (Yang ¶0042-¶0043 discloses instructions being stored to implement the methodologies), the computer programs being arranged to control the optical camera sensor and the lidar sensor (Yang ¶0042-¶0043 discloses instructions being stored to implement the methodologies and ¶0028 discloses how the vehicle control system requests specific sensor data pertaining to a specific route); and a memory for storing the computer programs (Yang ¶0043 discloses instructions being stored to a memory to implement the methodologies), wherein the computer programs comprise the steps of (Yang ¶0025, ¶0042-¶0043 and Fig 15 disclose a computer machine executing instructions that implement the methodologies): capturing image data of the object via the optical camera sensor (Yang ¶0076, ¶0070, ¶0105 discloses capturing an image via the camera for the surrounding area); storing the captured image data (Yang ¶0076 discloses the captured image data being stored by the camera); scanning the object using the lidar sensor (Yang ¶0034 and ¶0068 discloses the lidar sensor scanning the object and surrounding environment); storing scanned point cloud data (Yang ¶0048 discloses storing the sensor data) via the lidar sensor (Yang ¶0005 discloses lidar sensors) or a moving blob (Yang ¶0047 discloses identifying if the object is moving or not and ¶0087 discloses the shape of the feature in the image) to the moving object (Yang ¶0047, and ¶0065 discloses identifying and determining if an object is moving or not) position, speed (Yang ¶0034 discloses obtaining the position and speed), and time stamp (Yang ¶0065 discloses the image captured and recorded with a timestamp) when the moving object (Yang ¶0047, and ¶0065 discloses identifying and determining if an object is moving or not) passes a certain point in a camera view of the camera sensor (Yang ¶0073 discloses a threshold distance from the sensor to trigger the sensors to act); and matching (Yang ¶0073 discloses synchronizing the camera and lidar sensors) an incoming moving blob corresponding to the moving object (Yang ¶0070 discloses the lidar point cloud and image corresponding) by using the information received by the camera sensor (Yang ¶0065 discloses the image captured and recorded with a timestamp) when the incoming moving blob passes a certain point (Yang ¶0098 discloses triggering the module to control the lidar within the field of view of the camera based on a threshold distance) in a lidar view corresponding to the certain point in the camera view (Yang ¶0073 discloses a threshold distance from the sensor to trigger the sensors to act).

Yang does not explicitly disclose tracking an object within a surrounding environment, a first buffer memory, a second buffer memory, into the first buffer memory, into the second buffer memory, in the first buffer memory, in the second buffer memory, assigning an identification code (ID) in the first buffer memory, or obtaining information including the ID.

Hicks is in the same field of image analysis of camera and lidar point data for use in automated vehicle systems.
Further, Hicks teaches tracking an object within a surrounding environment (Hicks Fig 6, 268 and Col 3 Lines 25-30 disclose tracking the object in the model), a first buffer memory (Hicks Col 19 Lines 5-15 disclose using RAM memory, which is a type of temporary memory to be accessed by the processor, and there being multiple memories), a second buffer memory (Hicks Col 19 Lines 5-15 disclose using RAM memory, which is a type of temporary memory to be accessed by the processor, and there being multiple memories), into the first buffer memory (Hicks Col 19 Lines 5-15 disclose using RAM memory, which is a type of temporary memory to be accessed by the processor, and there being multiple memories), into the second buffer memory (Hicks Col 19 Lines 5-15 disclose using RAM memory, which is a type of temporary memory to be accessed by the processor, and there being multiple memories), in the first buffer memory (Hicks Col 19 Lines 5-15 disclose using RAM memory, which is a type of temporary memory to be accessed by the processor, and there being multiple memories), in the second buffer memory (Hicks Col 19 Lines 5-15 disclose using RAM memory, which is a type of temporary memory to be accessed by the processor, and there being multiple memories); assigning an identification code (ID) (Hicks Col 14 Lines 5-10 discloses assigning identifiers to identified objects) in the first buffer memory (Hicks Col 19 Lines 5-15 disclose using RAM memory, which is a type of temporary memory to be accessed by the processor, and there being multiple memories); and obtaining information (Hicks Col 1 Lines 45-50 discloses obtaining information of the scene) including the ID (Hicks Col 14 Lines 5-10 discloses assigning identifiers to identified objects).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yang by incorporating the multiple temporary memories to store the information including the identification and classification of the objects and surrounding environment, as well as the addition of the second lidar scanner and the configuration and type of lidar scanner as taught by Yang, to make an invention that can automatically identify and classify objects in a broad field of view while lessening computation time as a whole for the system; thus one of ordinary skill in the art would be motivated to combine the references since there is a need to identify the important object in the field of view, since the whole field of view does not usually include details that are important for navigation. In some systems, a visual light camera is used in AV systems to compensate for the slower frame rate of the lidar. The camera images are processed to select areas of interest. The lidar can be directed to perform a more detailed scan of the selected areas. (Hicks Col 1 Lines 50-60). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
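
Read as a pipeline, the limitations mapped above amount to: buffer camera frames and lidar point clouds separately, assign an ID to a moving object on the camera side, record its position, speed, and timestamp when it passes a reference point in the camera view, and then match an incoming lidar blob against that record when it passes the corresponding point in the lidar view. The sketch below is an editorial illustration of that flow only; every name (TrackRecord, match_incoming_blob, the detection fields) and the simple time-of-arrival matching rule are assumptions, not taken from the application or from Yang or Hicks.

```python
# Minimal sketch of the two-buffer camera/lidar matching flow that the
# rejection attributes to Claim 1. All names, data shapes, and thresholds
# are hypothetical illustrations, not the applicant's or the references'
# actual implementation.
from collections import deque
from dataclasses import dataclass


@dataclass
class TrackRecord:
    obj_id: int        # identification code (ID) assigned on the camera side
    position: tuple    # where the object crossed the reference point
    speed: float       # estimated speed at the crossing
    timestamp: float   # time the object passed the reference point


image_buffer = deque(maxlen=64)   # "first buffer memory": camera frames
cloud_buffer = deque(maxlen=64)   # "second buffer memory": lidar point clouds
pending_tracks = []               # camera-side records awaiting a lidar match


def on_camera_frame(frame, detections, t):
    """Buffer the frame; record ID, position, speed, and timestamp for any
    moving object that passes the reference point in the camera view."""
    image_buffer.append((t, frame))
    for det in detections:  # det: hypothetical detection dict from the camera stage
        if det["moving"] and det["crossed_reference_point"]:
            pending_tracks.append(
                TrackRecord(det["id"], det["position"], det["speed"], t))


def on_lidar_cloud(cloud, t):
    """Buffer the point cloud produced by the lidar scan."""
    cloud_buffer.append((t, cloud))


def match_incoming_blob(blob_time, camera_to_lidar_gap_m=1.0, tolerance_s=0.2):
    """Match a moving blob that passes the lidar-view point corresponding to
    the camera reference point against the stored camera-side records."""
    for rec in pending_tracks:
        expected = rec.timestamp + camera_to_lidar_gap_m / max(rec.speed, 1e-6)
        if abs(blob_time - expected) < tolerance_s:
            return rec.obj_id  # the blob inherits the camera-assigned ID
    return None
```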
Regarding Claim 2, Yang in view of Hicks teaches the computer vision apparatus of claim 1, wherein the computer programs further include the steps of (Yang ¶0025, ¶0042-¶0043 and Fig 15 disclose a computer machine executing instructions that implement the methodologies): updating a classification (Hicks Col 7 Lines 30-40 and Col 10 Lines 4-10, and Col 9 Lines 25-30 disclose the object being classified and tracked and the updated point cloud being used to model the object) of the moving object (Yang ¶0047, and ¶0065 discloses identifying and determining if an object is moving or not) obtained by the camera sensor (Hicks Col 14 Lines 10-15 disclose classifying based on the sensor data) using those of the lidar sensor when the incoming blob enters the lidar view (Yang ¶0098 discloses triggering the module to control the lidar within the field of view of the camera based on a threshold distance) and has obtained the classification of the moving blob (Hicks Col 7 Lines 30-40 and Col 10 Lines 4-10, and Col 9 Lines 25-30 disclose the object being classified and tracked and the updated point cloud being used to model the object). See rationale for Claim 1, its parent claim.

Regarding Claim 3, Yang in view of Hicks teaches the computer vision apparatus of claim 2, wherein the lidar sensor (Yang ¶0005 discloses lidar sensors) includes a rotation mechanism (Hicks Col 16 Lines 32-40 disclose the lidar being able to rotate). See rationale for Claim 1, its parent claim.

Regarding Claim 4, Yang in view of Hicks teaches the computer vision apparatus of claim 2, wherein the lidar sensor (Yang ¶0005 discloses lidar sensors) includes a solid state lidar sensor (Hicks Col 17 Lines 24-30 discloses that the lidar may be a flash lidar, which is a type of solid state lidar). See rationale for Claim 1, its parent claim.
Regarding Claim 5, Yang teaches a computer vision apparatus (Yang ¶0040 discloses that the method operates on a standalone machine) for detecting (Yang ¶0034 discloses detecting the surroundings of the vehicle): an optical camera sensor (Yang ¶0005 and ¶0046 discloses camera sensors) for obtaining image data of the object (Yang ¶0046 and ¶0065, ¶0067, ¶0073 discloses image data in the form of images or videos captured from the camera sensors); for obtaining point cloud data (Yang ¶0068, ¶0070 discloses the lidar sensor outputting the point clouds capturing the surrounding environment); for storing (Yang ¶0048 discloses storing the sensor data) the image data (Yang ¶0046 and ¶0065, ¶0067, ¶0073 discloses image data in the form of images or videos captured from the camera sensors) from the optical camera sensor (Yang ¶0005 and ¶0046 discloses camera sensors); for storing (Yang ¶0048 discloses storing the sensor data) the point cloud data (Yang ¶0068, ¶0070 discloses the lidar sensor outputting the point clouds capturing the surrounding environment); a CPU (Central Processing Unit) (Yang ¶0042 discloses a CPU included in a computer system) on which computer programs run thereon (Yang ¶0042-¶0043 discloses instructions being stored to implement the methodologies), the computer programs being arranged to control the optical camera sensor (Yang ¶0042-¶0043 discloses instructions being stored to implement the methodologies and ¶0028 discloses how the vehicle control system requests specific sensor data pertaining to a specific route); a memory for storing data associated with the computer programs (Yang ¶0043 discloses instructions being stored to a memory to implement the methodologies), wherein the computer programs comprise the steps of (Yang ¶0025, ¶0042-¶0043 and Fig 15 disclose a computer machine executing instructions that implement the methodologies): capturing image data of the object via the optical camera sensor (Yang ¶0076, ¶0070, ¶0105 discloses capturing an image via the camera for the surrounding area); storing the captured image data (Yang ¶0076 discloses the captured image data being stored by the camera); synchronizing position of objects (Yang ¶0073 discloses synchronizing the camera and lidar sensors) identified by the optical camera sensor (Yang ¶0046 and ¶0065, ¶0067, ¶0073 discloses image data in the form of images or videos captured from the camera sensors) when the computer programs (Yang ¶0025, ¶0042-¶0043 and Fig 15 disclose a computer machine executing instructions that implement the methodologies) identify a moving blob or a moving object (Yang ¶0047 discloses identifying if the object is moving or not and ¶0087 discloses the shape of the feature in the image); and overlaying a position of the image data onto a position of the moving blob (Yang Fig 6, ¶0015, and ¶0093 disclose the overlay of the image data and the point cloud blob) so that a remaining view area can be matched (Yang ¶0073 discloses lining up the view of both the camera and the sensor) to a view area of the other sensors (Yang ¶0073 discloses synchronizing the camera and lidar sensors).
Yang does not explicitly disclose tracking an object within a surrounding environment, a plurality of lidar sensors, the plurality of lidar sensors including a first lidar sensor and a second lidar sensor; a first buffer memory, a second buffer memory, from the plurality of lidar sensors and the plurality of lidar sensors, into the first buffer memory; scanning the object via the first lidar sensor; scanning the object via the second lidar sensor; wherein the scanning via the second lidar sensor is phase-shifted from the scanning via the first lidar; merging cloud data from the first lidar sensor and cloud data from the second lidar sensor; storing the merged cloud data into the second buffer memory, the plurality of lidar sensors; performing parallel pre-process data in the first buffer memory and the second buffer memory independently using independent blob detection algorithms; estimating sizes and shapes of the objects stored in the first buffer memory and the second buffer memory, in the first buffer memory and the second buffer memory.

Hicks is in the same field of image analysis of camera and lidar point data for use in automated vehicle systems. Further, Hicks teaches tracking an object within a surrounding environment (Hicks Fig 6, 268 and Col 3 Lines 25-30 disclose tracking the object in the model), a plurality of lidar sensors (Hicks Col 12 Lines 55-65 disclose a first and second lidar sensor providing a plurality of sensors), the plurality of lidar sensors including a first lidar sensor and a second lidar sensor (Hicks Col 12 Lines 55-65 disclose a first and second lidar sensor providing a plurality of sensors); a first buffer memory (Hicks Col 19 Lines 5-15 disclose using RAM memory, which is a type of temporary memory to be accessed by the processor, and there being multiple memories), a second buffer memory (Hicks Col 19 Lines 5-15 disclose using RAM memory, which is a type of temporary memory to be accessed by the processor, and there being multiple memories), from the plurality of lidar sensors (Hicks Col 12 Lines 55-65 disclose a first and second lidar sensor providing a plurality of sensors) and the plurality of lidar sensors (Hicks Col 12 Lines 55-65 disclose a first and second lidar sensor providing a plurality of sensors), into the first buffer memory (Hicks Col 19 Lines 5-15 disclose using RAM memory, which is a type of temporary memory to be accessed by the processor, and there being multiple memories); scanning the object via the first lidar sensor (Hicks Col 12 Lines 55-65 disclose scanning the environment to create a first point cloud with the first lidar device); scanning the object via the second lidar sensor (Hicks Col 12 Lines 55-65 disclose scanning the environment to create a second point cloud with the second lidar device); wherein the scanning via the second lidar sensor (Hicks Col 12 Lines 55-65 disclose scanning the environment to create a second point cloud with the second lidar device) is phase-shifted (Hicks Col 16 Lines 20-32 disclose analyzing the phase shift of time in the lidar system) from the scanning via the first lidar (Hicks Col 12 Lines 55-65 disclose scanning the environment to create a first point cloud with the first lidar device); merging cloud data (Hicks Col 13 Lines 15-25 disclose the fusion of different sensor data) from the first lidar sensor (Hicks Col 12 Lines 55-65 disclose scanning the environment to create a first point cloud with the first lidar device) and cloud data from the second lidar sensor (Hicks Col 12 Lines 55-65 disclose scanning the environment to create a second point cloud with the second lidar device); storing (Hicks Col 15 Lines 5-8 disclose local storage) the merged cloud data (Hicks Col 13 Lines 15-25 disclose the fusion of different sensor data) into the second buffer memory (Hicks Col 19 Lines 5-15 disclose using RAM memory, which is a type of temporary memory to be accessed by the processor, and there being multiple memories), the plurality of lidar sensors (Hicks Col 12 Lines 55-65 disclose a first and second lidar sensor providing a plurality of sensors); performing parallel pre-process data (Hicks Col 21 Lines 20-25 disclose performing parallel processing) in the first buffer memory (Hicks Col 19 Lines 5-15 disclose using RAM memory, which is a type of temporary memory to be accessed by the processor, and there being multiple memories) and the second buffer memory (Hicks Col 19 Lines 5-15 disclose using RAM memory, which is a type of temporary memory to be accessed by the processor, and there being multiple memories) independently using independent blob detection algorithms (Hicks Col 2 Lines 35-45 and Col 13 Lines 35-45 disclose using algorithms for object detection and classification on independent frames); estimating sizes and shapes of the objects (Hicks Col 5 Lines 20-25 discloses the size of the objects stored in the model and Col 17 Lines 3-8 discloses the shape of the object) stored in the first buffer memory (Hicks Col 19 Lines 5-15 disclose using RAM memory, which is a type of temporary memory to be accessed by the processor, and there being multiple memories) and the second buffer memory (Hicks Col 19 Lines 5-15 disclose using RAM memory, which is a type of temporary memory to be accessed by the processor, and there being multiple memories), in the first buffer memory (Hicks Col 19 Lines 5-15 disclose using RAM memory, which is a type of temporary memory to be accessed by the processor, and there being multiple memories) and the second buffer memory (Hicks Col 19 Lines 5-15 disclose using RAM memory, which is a type of temporary memory to be accessed by the processor, and there being multiple memories).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yang by incorporating the multiple temporary memories to store the information including the identification and classification of the objects and surrounding environment, as well as the addition of the second lidar scanner and the configuration and type of lidar scanner as taught by Yang, to make an invention that can automatically identify and classify objects in a broad field of view while lessening computation time as a whole for the system; thus one of ordinary skill in the art would be motivated to combine the references since there is a need to identify the important object in the field of view, since the whole field of view does not usually include details that are important for navigation. In some systems, a visual light camera is used in AV systems to compensate for the slower frame rate of the lidar. The camera images are processed to select areas of interest. The lidar can be directed to perform a more detailed scan of the selected areas. (Hicks Col 1 Lines 50-60). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
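
The claim 5 limitations that the rejection pieces together from Yang and Hicks describe a second variant: two lidar sensors scanning with a phase shift, their clouds merged into a second buffer while camera data sits in a first buffer, and blob detection run independently and in parallel over the two buffers, with sizes and shapes estimated for the detected objects. The following sketch is a hedged illustration of that arrangement only; the sensor API, the voxel-based blob grouping, and all names are assumptions, not drawn from the claims or the references.

```python
# Illustrative-only sketch of the claim 5 arrangement as characterized above:
# phase-shifted scans from two lidar sensors merged into one buffer, with
# independent blob detection run in parallel over the camera and lidar data.
# Sensor API, voxel size, and all names are hypothetical assumptions.
from concurrent.futures import ThreadPoolExecutor
import numpy as np


def scan(sensor, phase_offset_s):
    """Hypothetical phase-shifted scan; returns an (N, 3) point cloud array."""
    return sensor.scan(start_delay=phase_offset_s)  # assumed sensor interface


def merge_clouds(first_cloud, second_cloud):
    """Merge the two phase-shifted point clouds ("second buffer memory")."""
    return np.vstack([first_cloud, second_cloud])


def detect_cloud_blobs(cloud, cell_m=0.5):
    """Toy blob detector: bucket points into coarse voxels and report each
    occupied voxel's centroid and extent as an estimated blob size/shape."""
    keys = np.floor(cloud / cell_m).astype(int)
    blobs = []
    for key in np.unique(keys, axis=0):
        pts = cloud[(keys == key).all(axis=1)]
        blobs.append({
            "centroid": pts.mean(axis=0),
            "size": pts.max(axis=0) - pts.min(axis=0),  # rough shape/extent
        })
    return blobs


def detect_image_blobs(frame):
    """Placeholder for an independent image-domain blob detector."""
    return []  # assumed stub; any 2D detector could sit here


def preprocess_buffers(latest_frame, first_cloud, second_cloud):
    """Run the two blob detectors independently and in parallel."""
    merged = merge_clouds(first_cloud, second_cloud)
    with ThreadPoolExecutor(max_workers=2) as pool:
        image_job = pool.submit(detect_image_blobs, latest_frame)
        cloud_job = pool.submit(detect_cloud_blobs, merged)
        return image_job.result(), cloud_job.result()
```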
Regarding Claim 6, Yang in view of Hicks teaches the computer vision apparatus of claim 5, wherein the computer programs further comprise the steps of (Yang ¶0025, ¶0042-¶0043 and Fig 15 disclose a computer machine executing instructions that implement the methodologies): assigning an identification code (ID) (Hicks Col 14 Lines 5-10 discloses assigning identifiers to identified objects) to the moving object (Yang ¶0047, and ¶0065 discloses identifying and determining if an object is moving or not) in the first buffer memory (Hicks Col 19 Lines 5-15 disclose using RAM memory, which is a type of temporary memory to be accessed by the processor, and there being multiple memories); obtaining information (Hicks Col 1 Lines 45-50 discloses obtaining information of the scene) including the ID (Hicks Col 14 Lines 5-10 discloses assigning identifiers to identified objects), position, speed (Yang ¶0034 discloses obtaining the position and speed), and time stamp (Yang ¶0065 discloses the image captured and recorded with a timestamp) when the moving object (Yang ¶0047, and ¶0065 discloses identifying and determining if an object is moving or not) passes a certain point in a camera view of the camera sensor (Yang ¶0073 discloses a threshold distance from the sensor to trigger the sensors to act); and matching (Yang ¶0073 discloses synchronizing the camera and lidar sensors) an incoming moving blob corresponding to the moving object (Yang ¶0070 discloses the lidar point cloud and image corresponding) by using the information received by the camera sensor (Yang ¶0065 discloses the image captured and recorded with a timestamp) when the incoming moving blob passes a certain point (Yang ¶0098 discloses triggering the module to control the lidar within the field of view of the camera based on a threshold distance) in a lidar view corresponding to the certain point in the camera view (Yang ¶0073 discloses a threshold distance from the sensor to trigger the sensors to act). See rationale for Claim 5, its parent claim.

Regarding Claim 7, Yang in view of Hicks teaches the computer vision apparatus of claim 6, wherein the computer programs further include the steps of (Yang ¶0025, ¶0042-¶0043 and Fig 15 disclose a computer machine executing instructions that implement the methodologies): updating a classification (Hicks Col 7 Lines 30-40 and Col 10 Lines 4-10, and Col 9 Lines 25-30 disclose the object being classified and tracked and the updated point cloud being used to model the object) of the moving object (Yang ¶0047, and ¶0065 discloses identifying and determining if an object is moving or not) obtained by the camera sensor (Hicks Col 14 Lines 10-15 disclose classifying based on the sensor data) using those of the lidar sensor when the incoming blob enters the lidar view (Yang ¶0098 discloses triggering the module to control the lidar within the field of view of the camera based on a threshold distance) and has obtained the classification of the moving blob (Hicks Col 7 Lines 30-40 and Col 10 Lines 4-10, and Col 9 Lines 25-30 disclose the object being classified and tracked and the updated point cloud being used to model the object). See rationale for Claim 5, its parent claim.

Regarding Claim 8, Yang in view of Hicks teaches the computer vision apparatus of claim 7, wherein the lidar sensor (Yang ¶0005 discloses lidar sensors) includes a rotation mechanism (Hicks Col 16 Lines 32-40 disclose the lidar being able to rotate). See rationale for Claim 5, its parent claim.
Regarding Claim 9, Yang in view of Hicks teaches the computer vision apparatus of claim 7, wherein the plurality (Hicks Col 12 Lines 55-65 disclose a first and second lidar sensor providing a plurality of sensors) of lidar sensors (Yang ¶0005 discloses lidar sensors) includes a solid state lidar sensor (Hicks Col 17 Lines 24-30 discloses that the lidar may be a flash lidar, which is a type of solid state lidar). See rationale for Claim 5, its parent claim.

References Cited

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US-20190387216-A1 to Hicks discloses a method to post process lidar and camera data to model a scene.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RACHEL LYNN ROBERTS, whose telephone number is (571) 272-6413. The examiner can normally be reached Monday-Friday, 7:30am-5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Oneal Mistry, can be reached at (313) 446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RACHEL L ROBERTS/
Examiner, Art Unit 2674

/ONEAL R MISTRY/
Supervisory Patent Examiner, Art Unit 2674

Prosecution Timeline

Apr 04, 2023: Application Filed
Feb 25, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12581132: LARGE-SCALE POINT CLOUD-ORIENTED TWO-DIMENSIONAL REGULARIZED PLANAR PROJECTION AND ENCODING AND DECODING METHOD (2y 5m to grant; granted Mar 17, 2026)
Patent 12569208: PET APPARATUS, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM (2y 5m to grant; granted Mar 10, 2026)
Patent 12564324: IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING SYSTEM FOR ABNORMALITY DETECTION (2y 5m to grant; granted Mar 03, 2026)
Patent 12561773: METHOD AND APPARATUS FOR PROCESSING IMAGE, ELECTRONIC DEVICE, CHIP AND MEDIUM (2y 5m to grant; granted Feb 24, 2026)
Patent 12525028: CONTACT OBJECT DETECTION APPARATUS AND NON-TRANSITORY RECORDING MEDIUM (2y 5m to grant; granted Jan 13, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90%
With Interview: 99% (+14.3%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 19 resolved cases by this examiner. Grant probability derived from career allow rate.
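
For transparency on how the headline number is obtained, the 90% figure is consistent with simple rounding of the career allow rate quoted above (17 granted of 19 resolved); a one-line check:

```python
# The 90% grant probability appears to be the rounded career allow rate.
granted, resolved = 17, 19
print(f"{granted / resolved:.1%}")  # 89.5%, displayed as 90%
```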
