Prosecution Insights
Last updated: April 19, 2026
Application No. 18/460,213

GENERATING OCCLUSION ATTRIBUTES FOR OCCLUDED OBJECTS

Non-Final OA (§103, §112)
Filed: Sep 01, 2023
Examiner: O'MALLEY, CONOR AIDAN
Art Unit: 2675
Tech Center: 2600 (Communications)
Assignee: GM Cruise Holdings LLC
OA Round: 2 (Non-Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 2-3
Time to Grant: 3y 0m
Grant Probability with Interview: 72%

Examiner Intelligence

Career Allow Rate: 67% (16 granted / 24 resolved), above average: +4.7% vs TC avg
Interview Lift: +5.7% (moderate), based on resolved cases with interview
Typical Timeline: 3y 0m average prosecution; 26 applications currently pending
Career History: 50 total applications across all art units

Statute-Specific Performance

§101: 24.2% (-15.8% vs TC avg)
§103: 35.9% (-4.1% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§112: 22.0% (-18.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 24 resolved cases.

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 10-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 10 recites the limitation "camera space of the vehicle" in lines 11-12 of the claim. There is insufficient antecedent basis for this limitation: it is unclear whether "the vehicle" refers to the claimed vehicle or to the additional vehicle, and clarification of which vehicle performs each step is needed. Claims 11-15, which depend from claim 10, inherit this deficiency and are similarly rejected.

Claim 11 recites the limitation "by the vehicle" in the fourth line of the claim. There is insufficient antecedent basis for this limitation: it is unclear whether "the vehicle" refers to the claimed vehicle or to the additional vehicle, and clarification of which vehicle performs each step is needed.
Claim 12 recites the limitation "camera of the vehicle" in the last line; claim 13 recites "is detected by the vehicle" in the last line; claim 14 recites "detection range of the vehicle" in the last line; and claim 15 recites "not yet detected by the vehicle" in the last two lines. In each case there is insufficient antecedent basis for the limitation: it is unclear whether "the vehicle" refers to the claimed vehicle or to the additional vehicle, and clarification of which vehicle performs each step is needed.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention, in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-2, 4-11, and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Pendleton et al. (GB 2610663 A), hereinafter Pendleton, in view of Shi et al. (US 20230343018 A1), hereinafter Shi.

Regarding claim 1, Pendleton discloses a method for generating one or more occlusion attributes based on object information detected by a first vehicle and a second vehicle in a scene, the method comprising:

receiving a first three-dimensional shape representing a first object in the scene, wherein the first three-dimensional shape is determined by the first vehicle, and the first object is within a first field of perceivable area of one or more sensors of the first vehicle (Abstract, paragraphs 59 and 93, and figure 1: paragraph 59 discloses 2D and 3D cameras on a vehicle equipped to capture images of a scene and communicate them to another vehicle; the abstract reinforces that the sensors detect an object that occludes another object; figure 1 shows the vehicles communicating with each other through the network about the various objects in the scene; and paragraph 93 shows the vehicle reconstructing 2D and 3D maps of the objects in the scene from the perception system, i.e., the cameras of the various vehicles in the scene);

determining that the first object is at least partially occluded by a second object, wherein the second object is within a second field of perceivable area of one or more sensors of the second vehicle (Abstract, paragraph 58, and figure 10: paragraph 58 places the vehicles within the same area, which would be within the range of the sensors generally; the abstract states that the invention is directed toward detecting the occlusion of an object, which would read on partial occlusion; and figure 10's occlusion zones and small pedestrian exemplify partial occlusion);

projecting the first three-dimensional shape onto a two-dimensional camera space of the second vehicle to determine a first two-dimensional shape representing the first object (Paragraphs 59 and 185: paragraph 59 discloses 2D and 3D cameras and communication with other vehicles, and paragraph 185 discloses the use of bounding boxes for the images, in line with what the specification describes as projected onto the image);

determining a second two-dimensional shape in the two-dimensional camera space representing the second object in the scene (Paragraphs 59, 91, and 93: paragraph 59 discloses 2D and 3D cameras and communication with other vehicles; paragraph 91 further covers determining what kind of object it is and the projection, as it receives the data and identifies objects within the field of view of one of the cameras, which would cover the identification of multiple objects; and paragraph 93 discloses 2D and 3D maps of the scene generated from the cameras of the perception system);

storing a first occlusion attribute indicating that the first object is occluded by at least the second object (Abstract and paragraph 97: paragraph 97 discloses the storage of image data, and the abstract includes the occlusion of a vehicle as part of the sensor detection abilities, which would be included in the image data stored under paragraph 97); and

storing a second occlusion attribute indicating an extent to which the second two-dimensional shape is within an area of the first two-dimensional shape (Paragraphs 30, 97, and 136: paragraph 97 discloses the storage of image data; paragraph 30 includes an occlusion map and a perception visibility model; and paragraph 136 mentions an occlusion level map that models the severity of occlusion, which would cover the extent).

Pendleton does not disclose wherein projecting the first three-dimensional shape onto the two-dimensional camera space comprises ray-tracing outer points of the first three-dimensional shape onto coordinates in the two-dimensional camera space of the second vehicle. Shi does disclose this limitation (Paragraphs 75 and 114 and figure 1: paragraph 114 discloses using ray-tracing to convert all three-dimensional coordinates to two-dimensional coordinates, which would include "outer points"; paragraph 75 states that the main purpose of ray-tracing is to project an object in a 3D space to a two-dimensional screen space; and figure 1 shows that some of the rays in this ray-tracing process touch the outer points of the object in that image).
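For context on the claim 1 limitations at issue (projecting a 3D shape into a second vehicle's 2D camera space and storing an occlusion extent), the geometry can be sketched with a simple pinhole projection. This is an illustrative sketch only; all function names and numeric values here are hypothetical and are not taken from the application or the cited references:

```python
# Illustrative sketch: project the 8 corners of a 3D box into a 2D
# camera space with a pinhole model, then compute an occlusion
# attribute as the fraction of the projected shape covered by a
# second 2D shape. All names and numbers are hypothetical.

def project_point(pt, focal=1000.0, cx=640.0, cy=360.0):
    """Pinhole projection of a camera-frame 3D point (x, y, z) to pixels."""
    x, y, z = pt
    return (focal * x / z + cx, focal * y / z + cy)

def project_box(corners):
    """Project 3D corners and return the 2D axis-aligned bounding box."""
    pts = [project_point(c) for c in corners]
    us = [u for u, _ in pts]
    vs = [v for _, v in pts]
    return (min(us), min(vs), max(us), max(vs))

def coverage(box_a, box_b):
    """Fraction of box_a's area overlapped by box_b (0.0 to 1.0)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    w = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    h = max(0.0, min(ay1, by1) - max(ay0, by0))
    area_a = (ax1 - ax0) * (ay1 - ay0)
    return (w * h) / area_a if area_a else 0.0

# A unit cube 10 m ahead of the camera (the first object)...
cube = [(x, y, 10.0 + z) for x in (-0.5, 0.5)
        for y in (-0.5, 0.5) for z in (-0.5, 0.5)]
box_a = project_box(cube)
# ...and a second 2D shape covering its left half (the second object).
box_b = (box_a[0], box_a[1], (box_a[0] + box_a[2]) / 2, box_a[3])
print(round(coverage(box_a, box_b), 2))  # → 0.5
```

The printed value corresponds to the claimed "second occlusion attribute": here the second shape covers exactly half the projected area of the first.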
It would have been prima facie obvious to combine these teachings, as doing so would have led to a predictable increase in the accuracy of driving systems and of object size estimates. Specifying the object's extent more precisely would prevent an autonomous vehicle using this system from swerving unnecessarily when an inaccurately measured 2D shape juts into the lane.

Regarding claim 2, Pendleton discloses wherein determining the second two-dimensional shape comprises: receiving a second three-dimensional shape representing the second object in the scene, wherein the second three-dimensional shape is determined by the second vehicle (Abstract and paragraph 59: paragraph 59 discloses 2D and 3D cameras on a vehicle equipped to capture images of a scene and communicate them to another vehicle, and the abstract reinforces that the sensors detect an object that occludes another object); and projecting the second three-dimensional shape onto the two-dimensional camera space to determine the second two-dimensional shape representing the second object (Paragraphs 59 and 91: paragraph 59 discloses 2D and 3D cameras and communication with other vehicles, and paragraph 91 further covers determining what kind of object it is and the projection, as it receives the data and identifies objects within the field of view of one of the cameras, which would cover the identification of multiple objects).
Regarding claim 4, Pendleton discloses further comprising: receiving one or more additional three-dimensional shapes representing one or more additional objects in the scene, wherein the one or more additional three-dimensional shapes are determined by the first vehicle (Abstract and paragraph 59, as cited for claim 2); and determining, based on the one or more additional three-dimensional shapes, that at least one of the one or more additional objects is detected by the second vehicle (Paragraph 40: this paragraph discloses a specified range for the sensors for object detection; as long as the ranges overlap, this can be determined).

Regarding claim 5, Pendleton discloses further comprising: receiving one or more additional three-dimensional shapes representing one or more additional objects in the scene, wherein the one or more additional three-dimensional shapes are determined by the first vehicle (Abstract and paragraph 59, as cited for claim 2); and determining, based on the one or more additional three-dimensional shapes, that at least one of the one or more additional objects is outside a detection range of the second vehicle (Paragraph 40: this paragraph discloses a specified range for the sensors for object detection; as long as the ranges do not overlap, this can be determined).
Regarding claim 6, Pendleton discloses wherein determining that the first object is at least partially occluded by the second object comprises: determining, based on the first three-dimensional shape, that the first object is not yet detected by the second vehicle (Abstract: the abstract states that the likelihood of detection at specific locations is covered by the occlusion map; if that likelihood were zero, it would be known that the object has not yet been detected).

Regarding claim 7, Pendleton discloses wherein projecting the first three-dimensional shape onto the two-dimensional camera space comprises: translating the first three-dimensional shape defined in a common reference frame to a local reference frame used by the second vehicle (Paragraph 93: this paragraph discloses the localization system, which would bring things from a common frame of reference to a local one).

Regarding claim 8, Shi discloses that the coordinates define boundary points of the first two-dimensional shape (Paragraph 114 and figure 1: paragraph 114 discloses using ray-tracing on all vertices to convert all three-dimensional coordinates to two-dimensional coordinates, which would include "boundary points", and figure 1 shows that some of the rays in this ray-tracing process touch the boundary points of the object in that image).
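The common-to-local frame translation mapped for claim 7 is a standard rigid transform. As an illustration only (2D case for brevity; the pose values are invented, not taken from Pendleton):

```python
# Illustrative: re-express a point from a shared "common" frame into a
# second vehicle's local frame, given that vehicle's pose (position and
# heading) in the common frame. Values are hypothetical.
import math

def to_local_frame(pt, vehicle_pos, vehicle_heading):
    """Translate, then rotate, a common-frame point into the vehicle frame."""
    dx = pt[0] - vehicle_pos[0]
    dy = pt[1] - vehicle_pos[1]
    c, s = math.cos(-vehicle_heading), math.sin(-vehicle_heading)
    return (c * dx - s * dy, s * dx + c * dy)

# A point 10 m north of the common-frame origin, seen by a vehicle at
# the origin facing north (heading = pi/2): in the vehicle's own frame
# it lies 10 m straight ahead, i.e. approximately (10, 0).
local = to_local_frame((0.0, 10.0), (0.0, 0.0), math.pi / 2)
print(round(local[0], 6), round(local[1], 6))  # → 10.0 0.0
```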
Regarding claim 9, Pendleton discloses wherein projecting the first three-dimensional shape onto the two-dimensional camera space comprises: providing the first three-dimensional shape and a camera image captured by an image sensor of the second vehicle as inputs to a computer vision system; and determining, by the computer vision system, the first two-dimensional shape (Paragraphs 59 and 91: paragraph 59 discloses 2D and 3D cameras and communication with other vehicles, and paragraph 91 further covers determining what kind of object it is and the projection, as it receives the data and identifies objects within the field of view of one of the cameras, which would cover computer vision).

Regarding claims 10 and 16, they are similar to claim 1, but claim 10 adds additional elements. Claim 10 additionally claims "vehicle controls to cause the vehicle in to drive in the area;" (Paragraph 95 of Pendleton discloses a control system for a vehicle that allows it to drive in an area) and "and generate a command to the vehicle control based on the first occlusion attribute and the second occlusion attribute." (Paragraphs 94-95 of Pendleton: paragraph 94 additionally includes the localization system, which uses data corresponding to the position of the vehicle such as the occlusion attributes, and paragraph 95 discloses that the control system acts on such data). The additional elements of claim 10 cover any additions made by claim 16, and both claims are rejected under 35 U.S.C. 103.

Claims 11, 13, 14, and 18 are similar to claims 2, 4, 5, and 7, respectively, and are similarly rejected. Claims 15 and 17 are similar to claim 6 and are similarly rejected.
Claims 19 and 20 are similar to claims 8 and 9, respectively, and are similarly rejected.

Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Pendleton et al. (GB 2610663 A), hereinafter Pendleton, in view of Hantehzadeh et al. (US 20220358775 A1), hereinafter Hantehzadeh.

Regarding claim 3, Pendleton does not explicitly disclose wherein determining the second two-dimensional shape comprises: determining the second two-dimensional shape that corresponds to the second object by performing image segmentation on a camera image captured by the second vehicle. However, Hantehzadeh does disclose this limitation (Paragraph 13: this paragraph discloses the use of segmentation on an image for identification purposes). It would have been prima facie obvious to combine the teachings of these two disclosures, as doing so would allow a predictable increase in the efficiency of resources: image segmentation allows the more relevant parts of an image to be purposefully extracted, so that resources can be focused only on areas of greater importance, reducing the resources used.

Regarding claim 12, it is similar to claim 3 and is similarly rejected.

Response to Amendment

The amendments, entered 12/19/2025, have been considered in full and overcome the objections and some of the 112(b) rejections.

Response to Arguments

Applicant's arguments with respect to claims 1-20 under 35 U.S.C. 103 have been considered but are moot, because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The remaining 112(b) rejections that were not overcome by amendment were not argued, and they therefore remain. Arguments made against the objections, the application of Liem, and the 102 rejections were persuasive.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CONOR AIDAN O'MALLEY, whose telephone number is (571) 272-0226. The examiner can normally be reached Monday-Friday, 9:00 am-5:00 pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Moyer, can be reached at 572-272-9523. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). For assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/CONOR A O'MALLEY/
Examiner, Art Unit 2675

/ANDREW M MOYER/
Supervisory Patent Examiner, Art Unit 2675

Prosecution Timeline

Sep 01, 2023
Application Filed
Sep 29, 2025
Non-Final Rejection — §103, §112
Nov 04, 2025
Interview Requested
Nov 18, 2025
Examiner Interview Summary
Nov 18, 2025
Applicant Interview (Telephonic)
Dec 19, 2025
Response Filed
Feb 20, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573234 — BLINK DETECTION IN CABIN USING DYNAMIC VISION SENSOR
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12555254 — MEDICAL IMAGE PROCESSING APPARATUS, MEDICAL IMAGE PROCESSING APPARATUS METHOD, AND NON-TRANSITORY, COMPUTER-READABLE MEDIUM
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12541866 — MEDICAL IMAGE PROCESSING APPARATUS, METHOD, AND COMPUTER READABLE MEDIUM THAT ANALYZE A FLUORESCENCE IMAGE FROM PHOSPHOR IN BIOLOGICAL TISSUE
Granted Feb 03, 2026 (2y 5m to grant)

Patent 12536776 — TEACHING METHOD AND TRANSFER SYSTEM FOR SUBSTRATE USING THREE-DIMENSIONAL IMAGE DATA
Granted Jan 27, 2026 (2y 5m to grant)

Patent 12488417 — PARAMETRIC COMPOSITE IMAGE HARMONIZATION
Granted Dec 02, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 2-3
Grant Probability: 67% (72% with interview, +5.7%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate

Based on 24 resolved cases by this examiner. Grant probability derived from career allow rate.
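The derived figures are simple arithmetic on the examiner's career data shown above; a quick sketch of how the displayed numbers reduce, assuming the with-interview probability is the allow rate plus the interview lift (an assumption consistent with the rounding on this page):

```python
# Reproduce the headline projections from the examiner card data.
granted, resolved = 16, 24                     # career grants / resolved cases
allow_rate = 100 * granted / resolved          # 66.7 -> displayed as 67%
interview_lift = 5.7                           # percentage points
with_interview = allow_rate + interview_lift   # 72.4 -> displayed as 72%
print(round(allow_rate), round(with_interview))  # → 67 72
```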
