Prosecution Insights
Last updated: April 19, 2026
Application No. 18/953,407

Method and Device for Recognizing Distant Object by Vehicle with Autonomous Driving

Non-Final OA: §101, §102, §103
Filed: Nov 20, 2024
Examiner: LAMBERT, GABRIEL JOSEPH RENE
Art Unit: 3669
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Kia Corporation
OA Round: 1 (Non-Final)
Grant Probability: 67% (Favorable)
OA Rounds: 1-2
To Grant: 2y 11m
With Interview: 79%

Examiner Intelligence

Career Allow Rate: 67% (above average; 87 granted / 130 resolved; +14.9% vs TC avg)
Interview Lift: +11.8% for resolved cases with interview (moderate, roughly +12%)
Avg Prosecution: 2y 11m (typical timeline)
Total Applications: 153 across all art units (23 currently pending)

Statute-Specific Performance

§101: 18.0% (-22.0% vs TC avg)
§102: 14.9% (-25.1% vs TC avg)
§103: 38.3% (-1.7% vs TC avg)
§112: 27.5% (-12.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 130 resolved cases
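As a sanity check, the headline rates above follow from simple arithmetic on the reported career counts (a minimal sketch; the variable names are illustrative, not from any analytics API):

```python
# Career allowance rate: granted / resolved cases.
granted, resolved = 87, 130
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # -> 66.9%, displayed as 67%

# Interview lift: grant probability with an interview (79%) minus the
# baseline (67%) lands on the reported ~+12 point (+11.8%) lift.
lift = 0.79 - 0.67
print(f"{lift * 100:+.0f} points")  # -> +12 points
```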

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

101 Analysis – Step 1

Claims 1-20 are directed to a method (i.e. a process).

101 Analysis – Step 2A, Prong 1

Regarding Prong 1 of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes. Claims 1-20 include limitations that recite an abstract idea (emphasized below in bold); claim 1 will be used as a representative claim for the remainder of the 101 rejection.
Claim 1: A method performed by an apparatus of a vehicle, the method comprising: obtaining, via a camera of the vehicle, an image of an exterior view from the vehicle; generating a cropped image of a distant region in the obtained image; performing first object recognition on the cropped image of the distant region, wherein the cropped image has an original resolution of the obtained image; performing second object recognition on a processed image associated with the obtained image, wherein the processed image has a down-sampled resolution of the obtained image; performing third object recognition by matching a result of the first object recognition with a result of the second object recognition; and controlling, based on a result of the third object recognition, an operation of the vehicle.

The examiner submits that the foregoing bolded limitations constitute a “mental process” because, under its broadest reasonable interpretation, the claim covers performance of the limitation in the human mind. For example, “generating a cropped image of a distant region in the obtained image” in the context of the claim encompasses the user cropping the received image, which can be done by isolating part of the image by drawing borders around the distant region. Next, the limitation of “performing a first object recognition on the cropped image” in the context of the claim encompasses the user looking at the cropped image and seeing if the user can recognize an object, which recites a mental process. Next, the limitation of “performing second object recognition on a processed image associated with the obtained image, wherein the processed image has a down-sampled resolution of the obtained image” in the context of the claim encompasses the user performing object recognition on a received processed image (wherein the resolution is already down-sampled).
Additionally, a user can down-sample the image itself, by removing rows and/or columns of pixels, which is a process that can be done on a piece of paper. Similarly, the limitation of “performing third object recognition by matching a result of the first object recognition with a result of the second object recognition” in the context of the claim encompasses the user looking at both results of the first and second object recognition, and identifying an object based on the matching result, which is a mental process. Since this limitation can be done in the mind, it recites a mental process. Accordingly, the claim recites at least one abstract idea. The same rationale applies to independent claim 12.

Claim 4: wherein the performing of the first object recognition further comprises determining, based on information associated with the vehicle, whether recognition of a distant object is necessary.

Claim 5: wherein the information associated with the vehicle comprises at least one of: location information, speed information, steering information, or heading indication information.

Regarding claims 4 and 5, the limitation of performing a first object recognition comprising determining whether recognition of a distant object is necessary based on received vehicle parameters in the context of the claim encompasses the user making a decision of whether recognition of a distant object is necessary based on received data. Making a decision of not performing the first object recognition is a mental process, and therefore this limitation recites an abstract idea. The same rationale applies to claims 15-16.

Claim 6: wherein the generating of the cropped image comprises: determining a vanishing point in the obtained image; and determining the distant region by determining a region of interest that comprises the vanishing point.
Claim 7: wherein the determining of the distant region comprises: setting, based on camera calibration information of the vehicle, the vanishing point as a reference point for the region of interest having the original resolution.

Regarding claims 6-7, the limitation of determining a vanishing point in the obtained image, and determining the distant region that comprises the vanishing point, in the context of the claim encompasses the user identifying the vanishing point and using that to determine the region of interest. Since this limitation can be done mentally, it recites a mental process. Next, the limitation of setting the vanishing point based on received camera calibration information is a limitation that can be done on a piece of paper (i.e. setting a point comprises marking a point), and therefore recites a mental process, i.e. an abstract idea. The same rationale applies to claims 17-18.

Claim 11: wherein the performing of the third object recognition comprises: generating an aligned heat map by: scaling the first object recognition heat map of the distant region according to scaling information of the distant region; and matching the scaled first object recognition heat map with the second object recognition heat map for the overall region; and performing, based on the aligned heat map, the third object recognition.

Regarding claim 11, the limitation of generating an aligned heat map in the context of the claim encompasses the user scaling the first heat map of the distant region according to scaling information, which is a limitation that can be done on a piece of paper (i.e. adjusting the dimensions of the heat map to match the cropped region), and therefore recites a mental process. Next, the limitation of matching one map to another in the context of the claim encompasses the user taking the two maps and identifying how analogous they are to each other (i.e. matching), which is a process that can be done mentally.
Lastly, performing a third object recognition based on the aligned maps is a process that can be done mentally, since the process of identifying an object on a map is considered a mental process.

101 Analysis – Step 2A, Prong 2

Regarding Prong 2 of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.” In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”):

Claim 1: A method performed by an apparatus of a vehicle, the method comprising: obtaining, via a camera of the vehicle, an image of an exterior view from the vehicle; generating a cropped image of a distant region in the obtained image; performing first object recognition on the cropped image of the distant region, wherein the cropped image has an original resolution of the obtained image; performing second object recognition on a processed image associated with the obtained image, wherein the processed image has a down-sampled resolution of the obtained image; performing third object recognition by matching a result of the first object recognition with a result of the second object recognition; and controlling, based on a result of the third object recognition,
an operation of the vehicle.

Claim 2: wherein the performing of the first object recognition comprises inputting the cropped image having the original resolution into a first object recognition network.

Claim 3: wherein the performing of the second object recognition comprises inputting the processed image having the down-sampled resolution into a second object recognition network.

Claim 8: wherein the determining of the distant region further comprises storing scaling information of the region of interest with the original resolution.

Claim 9: wherein the performing of the first object recognition comprises receiving, based on a first object recognition network, a first object recognition heat map of the distant region.

Claim 10: wherein the performing of the second object recognition comprises receiving a second object recognition heat map for an overall region of the obtained image.

For the following reason(s), the examiner submits that the above identified additional limitations do not integrate the above-noted abstract idea into a practical application. Regarding the additional limitations of “receiving data”, “inputting data”, and “storing data” from the underlined claims as recited above, the examiner submits that these limitations are insignificant extra-solution activities that merely use a computer (i.e. processors) to perform the process. In particular, the receiving step, the inputting step, and the storing step by the processors are recited at a high level of generality (i.e. as a general means of gathering data, inputting data, and storing data), and amount to mere data gathering and data storage, which is a form of insignificant extra-solution activity. Lastly, the “one or more processors” merely describes how to generally “apply” the otherwise mental judgements in a generic or general purpose vehicle environment. The vehicle control system is recited at a high level of generality and merely automates the recognizing steps.
Lastly, the additional limitation of the independent claims, “controlling, based on a result of the third object recognition, an operation of the vehicle,” does not specifically recite what an operation of the vehicle is, and Para. 0048 of the specification filed 11/20/2024 discloses that an operation of the vehicle can comprise “sensor control” and “alarm timing control”, which is broad enough to capture the control of a display of the vehicle (i.e. extra-solution activity), and that is significantly different than controlling the vehicle’s acceleration based on the third object recognition. Therefore, this controlling step is insignificant extra-solution activity that merely uses a computer to perform the process. The same rationale applies to claims 12-14, and 19-20.

Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05).
Accordingly, the additional limitation(s) do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

101 Analysis – Step 2B

Regarding Step 2B of the 2019 PEG, representative independent claims 1, 8, and 15 do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor to perform the determining and comparing amounts to nothing more than applying the exception using a generic computer component. Generally applying an exception using a generic computer component cannot provide an inventive concept. And as discussed above, with regards to the additional limitations of “receiving data”, “inputting data”, “storing data”, and “controlling an operation of the vehicle”, the examiner submits that these limitations are insignificant extra-solution activities. Further, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The additional limitations of “receiving” and “storing data” are well-understood, routine, and conventional activities because the specification does not provide any indication that the processor is anything other than a conventional processor for receiving and storing data. The step of “receiving” data is taught in the primary reference Shen et al. US20200175326A1, see Para. 0032. The step of “storing data” is further taught in Shen et al. US20200175326A1, see Para. 0024.
The step of “inputting data” in a neural network is further taught in Shen et al. US20200175326A1, see at least Para. 0039. Lastly, the step of controlling an operation of the vehicle based on the third object recognition is further taught in Shen et al. US20200175326A1, Para. 0039. Accordingly, the steps of receiving, storing, and inputting data, and controlling an operation of a vehicle based on data, are well-understood, routine, and conventional activity in the field. For these reasons, there is no inventive concept and the claim is not patent eligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3, 6, 12-14, and 17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Shen et al. US20200175326A1 (henceforth Shen).

Regarding claim 1, Shen discloses: A method performed by an apparatus of a vehicle, (See at least Fig. 2) the method comprising: obtaining, via a camera of the vehicle, an image of an exterior view from the vehicle; (See at least Fig. 2, step 202, Para. 0032, “At step 202, system 100 receives a high resolution image and one or more pieces of data relating to the image (e.g., as illustrated in FIG. 4).
In some embodiments, the high resolution image is a frame or image generated as the output of a camera.” The image of the exterior view from the vehicle is obtained.) generating a cropped image of a distant region in the obtained image; performing first object recognition on the cropped image of the distant region, wherein the cropped image has an original resolution of the obtained image; (See at least Fig. 4, step 206, Para. 0036, “At step 206, system 100 crops the priority FOV to generate a high resolution crop of the image (example shown in FIG. 4). As used herein, a “crop” is a predetermined segment or portion of the image…The high resolution crop preferably has the same resolution as the raw image”. A cropped image of a distant region (see Fig. 4, wherein the distant region comprises two cars ahead), wherein the cropped image has the same resolution as the obtained image. Further see Para. 0039, wherein the detector performs a first object recognition on the cropped image.) performing second object recognition on a processed image associated with the obtained image, wherein the processed image has a down-sampled resolution of the obtained image; (See at least Para. 0037, “a low resolution version of the original image with a large, down sampled field of vision”, the image has a down-sampled resolution of the obtained image. The detector 212 performs a second object recognition of the image.) performing third object recognition by matching a result of the first object recognition with a result of the second object recognition; (See at least Fig. 5, and Para. 0038-0039, wherein combining the outputs of the first and second object recognition via the detector 212 includes removing duplicates. A result of the first object detection is matched with a result of the second object detection. Additionally, see Para. 0043, wherein the output from the high-resolution crop is aligned (i.e. matched) with the output of the low resolution image.)
and controlling, based on a result of the third object recognition, an operation of the vehicle. (See at least Para. 0039, “The output may be usable by the system 100, or another system of one or more processors, to drive, and/or otherwise control operation of, an autonomous vehicle.” Additionally, see Para. 0059, wherein the output 516 is used in autonomous navigation, driving, and operation of the vehicle.)

Regarding claim 2, Shen discloses: wherein the performing of the first object recognition comprises inputting the cropped image having the original resolution into a first object recognition network. (See at least Para. 0039, wherein the cropped image is input into a neural network for object detection.)

Regarding claim 3, Shen discloses: wherein the performing of the second object recognition comprises inputting the processed image having the down-sampled resolution into a second object recognition network. (See at least Para. 0039, wherein the down-sampled image is input into a neural network for object detection. Additionally see Para. 0039, “The images (e.g., for the same frame) can be fed into the same detector, two parallel instances of the same detector, different detectors (e.g., one for the high-resolution crop, one for the low-resolution full image), or otherwise processed”, wherein a second object recognition network is used for inputting the down-sampled resolution image.)

Regarding claim 6, Shen discloses: wherein the generating of the cropped image comprises: determining a vanishing point in the obtained image; and determining the distant region by determining a region of interest that comprises the vanishing point. (See at least Para. 0024, wherein a vanishing line (i.e. a line is a variety of points) is determined such that the region of interest comprises the vanishing point.)

Regarding claim 12, Shen discloses the same limitations as recited in claim 1 above, and claim 12 is therefore rejected under the same rationale.
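The claim 1 mapping above walks through a dual-resolution pipeline: detect on a full-resolution crop of the distant region, detect on a down-sampled full frame, then merge both result sets in full-image coordinates and drop duplicates. A minimal NumPy sketch of that flow, for orientation only (the crop placement, toy detector, and merge rule are illustrative assumptions, not Shen's or the applicant's actual implementation):

```python
import numpy as np

def crop_distant_region(img, center, size):
    """Full-resolution crop around a chosen point (e.g. a vanishing point)."""
    cy, cx = center
    h, w = size
    return img[cy - h // 2 : cy + h // 2, cx - w // 2 : cx + w // 2]

def downsample(img, factor):
    """Low-resolution version of the full frame (stride subsampling)."""
    return img[::factor, ::factor]

def detect(img, thresh=200):
    """Toy detector: bright-pixel centers stand in for a recognition network."""
    ys, xs = np.nonzero(img > thresh)
    return list(zip(ys.tolist(), xs.tolist()))

def merge(crop_dets, crop_offset, low_dets, factor, radius=4):
    """'Third recognition': map both result sets into full-image coordinates
    and keep a low-res detection only if it does not duplicate a crop one."""
    oy, ox = crop_offset
    fused = [(y + oy, x + ox) for y, x in crop_dets]
    for y, x in low_dets:
        fy, fx = y * factor, x * factor
        if all(abs(fy - py) + abs(fx - px) > radius for py, px in fused):
            fused.append((fy, fx))
    return fused

frame = np.zeros((64, 64), dtype=np.uint8)
frame[30, 33] = 255  # small distant object near the frame center
frame[48, 8] = 255   # large nearby object, visible even when down-sampled

crop = crop_distant_region(frame, center=(32, 32), size=(16, 16))
low = downsample(frame, factor=4)
result = merge(detect(crop), crop_offset=(24, 24), low_dets=detect(low), factor=4)
print(result)  # the distant object survives only via the high-res crop
```

Note that the distant pixel falls off the stride grid of the down-sampled frame, which is exactly the failure mode the full-resolution crop branch is there to cover.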
Regarding claim 13, Shen discloses the same limitations as recited in claim 2 above, and claim 13 is therefore rejected under the same rationale. Regarding claim 14, Shen discloses the same limitations as recited in claim 3 above, and claim 14 is therefore rejected under the same rationale. Regarding claim 17, Shen discloses the same limitations as recited in claim 6 above, and claim 17 is therefore rejected under the same rationale.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4-5 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Shen in view of GomezCaballero et al. US20190026918A1 (henceforth GomezCaballero).

Regarding claim 4, Shen discloses the limitations as recited in claims 1 and 2 above. Shen does not specifically state wherein the performing of the first object recognition further comprises determining, based on information associated with the vehicle, whether recognition of a distant object is necessary.
However, GomezCaballero teaches: wherein the performing of the first object recognition further comprises determining, based on information associated with the vehicle, whether recognition of a distant object is necessary. (See at least Para. 0053, wherein a weight of the priority for the far object recognition processing and the near object recognition is determined based on analyzing road features at the location of the vehicle (i.e. based on information associated with the vehicle). Additionally, see Para. 0056-0057, wherein the vehicle data is used to determine the weight of the priority for the far/near object recognition processing. Therefore, it is determined whether recognition of a distant object is necessary (i.e. a low far object recognition priority).)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shen to incorporate the teachings of GomezCaballero to include the limitation as recited above such that “non-recognition rates of objects in both the near and far regions can be reduced. As a result, traveling safety can be improved” (Para. 0012, GomezCaballero). This would create a more robust object detection system, by setting priorities with regards to far object recognition processing and near object recognition processing. Additionally, a person having ordinary skill in the art would have a reasonable expectation of success in combining the teachings of Shen and GomezCaballero. The claimed invention is merely a combination of known elements and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the results of the combination would have been predictable.
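The far/near priority weighting attributed to GomezCaballero can be pictured as a simple gate on the expensive full-resolution branch. A toy sketch under made-up assumptions (the 60 km/h threshold and the two-signal gate are illustrative only, not the reference's actual weighting, which considers several vehicle signals):

```python
def distant_recognition_needed(speed_kph, heading_straight, min_speed=60.0):
    """Toy gate for the high-resolution distant-object branch.

    At highway speed on a straight heading, far objects matter (long
    stopping distances); at low speed or mid-turn, near recognition
    dominates. The threshold is an illustrative assumption.
    """
    return speed_kph >= min_speed and heading_straight

print(distant_recognition_needed(100.0, True))  # highway: run the far branch
print(distant_recognition_needed(20.0, True))   # parking lot: skip it
```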
Regarding claim 5, Shen does not specifically state wherein the information associated with the vehicle comprises at least one of: location information, speed information, steering information, or heading indication information. However, GomezCaballero teaches: wherein the information associated with the vehicle comprises at least one of: location information, speed information, steering information, or heading indication information. (See at least Para. 0056-0057, wherein the vehicle data such as speed information is used to determine the weight of the priority for the far/near object recognition processing.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shen to incorporate the teachings of GomezCaballero to include the limitation as recited above such that “non-recognition rates of objects in both the near and far regions can be reduced. As a result, traveling safety can be improved” (Para. 0012, GomezCaballero). This would create a more robust object detection system, by setting priorities with regards to far object recognition processing and near object recognition processing. Additionally, a person having ordinary skill in the art would have a reasonable expectation of success in combining the teachings of Shen and GomezCaballero. The claimed invention is merely a combination of known elements and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the results of the combination would have been predictable.

Regarding claim 15, Shen and GomezCaballero disclose the same limitations as recited in claim 5 above, and claim 15 is therefore rejected under the same rejection and obviousness rationale.
Regarding claim 16, Shen and GomezCaballero disclose the same limitations as recited in claim 6 above, and claim 16 is therefore rejected under the same rejection and obviousness rationale.

Claims 7-8 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Shen in view of Zhang et al. US20190082156A1 (henceforth Zhang).

Regarding claim 7, Shen discloses the limitations as recited in claims 1 and 6 above. Shen further discloses: setting the vanishing point as a reference point for the region of interest having the original resolution. (See at least Para. 0024, wherein the vanishing point is set for the region of interest having the original resolution (see at least Para. 0034).) However, Shen does not specifically state setting, based on camera calibration information of the vehicle, the vanishing point. However, Zhang teaches: setting, based on camera calibration information of the vehicle, the vanishing point. (See at least Para. 0007, “calibrating intrinsic parameters of a set of cameras; extracting corner points associated with a pattern; and computing a vanishing point based on information on the extracted corner points.” The vanishing point is set based on the camera calibration information of the vehicle.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shen to incorporate the teachings of Zhang to include the limitation as recited above such that the optical axes of a camera can be aligned (Para. 0007, Zhang). This would create a more robust system for setting up a camera on a vehicle. Additionally, a person having ordinary skill in the art would have a reasonable expectation of success in combining the teachings of Shen and Zhang.
The claimed invention is merely a combination of known elements and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the results of the combination would have been predictable.

Regarding claim 8, Shen further discloses: wherein the determining of the distant region further comprises storing scaling information of the region of interest with the original resolution. (See at least Para. 0024, wherein the heuristics database stores a set of heuristics to identify the desired field of view, including a predetermined dimension (i.e. storing information).)

Regarding claim 18, Shen and Zhang disclose the same limitations as recited in claim 7 above, and claim 18 is therefore rejected under the same rejection and obviousness rationale. Regarding claim 19, Shen and Zhang disclose the same limitations as recited in claim 8 above, and claim 19 is therefore rejected under the same rejection and obviousness rationale.

Claims 9-11 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Shen in view of Schulte et al. US20180300884A1 (henceforth Schulte).

Regarding claim 9, Shen discloses the limitations as recited in claim 1 above. Shen further discloses: wherein the performing of the first object recognition comprises receiving, based on a first object recognition network, a first object recognition map of the distant region. (See at least Para. 0018, “it may be appreciated that the percentage of the image cropped may be adjusted. For example, a central fourth of the image may be taken. As another example, a machine learning model may be used to identify a particular strip along a horizontal axis of the image which corresponds to a horizon or other vanishing line. In some embodiments, map data may be used.” Further see Para. 0033-0034.)
Shen does not specifically state a “first object recognition heat map of the distant region”. However, Schulte teaches: a first object recognition heat map of the distant region (See at least Para. 0041, “FPA 104 of IR imaging module 102 may be configured to detect IR radiation from a scene 140 for a field of view (FOV) of FPA 104, and provide IR image data (e.g., via analog or digital signals) representing the IR radiation in response to detecting the IR radiation”. A heat map of a distant region is determined to identify objects in a scene.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shen to incorporate the teachings of Schulte to include the limitation as recited above since “the average pixel intensities may advantageously be used to separate objects from each other. As can be observed in the two profile lines of horizontal profile 704 and vertical profile 706, objects may be separated from each other based on the local minimums” (Para. 0070, Schulte), which would create a more robust object recognition system for separating objects from each other. Additionally, a person having ordinary skill in the art would have a reasonable expectation of success in combining the teachings of Shen and Schulte. The claimed invention is merely a combination of known elements and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the results of the combination would have been predictable.

Regarding claim 10, Shen further discloses: wherein the performing of the second object recognition comprises receiving a second object recognition map for an overall region of the obtained image. (See at least Para.
0037, “a low resolution version of the original image with a large, down sampled field of vision”, the image has a down-sampled resolution of the obtained image. The detector 212 performs a second objected recognition of the image.) Shen does not specifically state a “second object recognition heat map for an overall region”. However, Schulte teaches: second object recognition heat map for an overall region (See at least Para. 0046, “FPA 104 may detect IR radiation received from scene 140 that only includes background 142 along optical path 150 for a FOV. In response, ROIC 114 may generate thermal image data (e.g., a thermal image) of background 142.” A second object recognition heat map is determined for an overall region.) It would have been obvious to one of the ordinary skill in the art before the effective filing date of the claimed invention to have modified Shen to incorporate the teachings of Schulte to include the limitation as recited above since “the average pixel intensities may advantageously be used to separate objects from each other. As can be observed in the two profile lines of horizontal profile 704 and vertical profile 706, objects may be separated from each other based on the local minimums” (Para. 0070, Schulte), which would create a more robust object recognition system for separating objects from each other. Additionally, a person having ordinary skill in the art would have a reasonable expectation of success in combining the teachings of Shen and Schulte. The claimed invention is merely a combination of known elements and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the results of the combination would have been predictable. 
Regarding claim 11, Shen further discloses: wherein the performing of the third object recognition comprises: generating an aligned map by: scaling the first object recognition map of the distant region according to scaling information of the distant region; (See at least Para. 0042, “scaling the high-resolution crop's detected objects down (or the low-resolution image's detected objects up) based on the scaling factor between the high-resolution crop (priority FOV) and the full image”. A first object recognition map is scaled according to scaling information of the distant region.) and matching the scaled first object recognition map with the second object recognition map for the overall region; and performing, based on the aligned map, the third object recognition. (See at least Para. 0043, “aligning the output from the high-resolution crop with the output of the low-resolution image during output combination. The outputs are preferably aligned based on the location of the high-resolution crop (priority FOV) relative to the full image, but can be otherwise aligned. The outputs are preferably aligned after scaling”. Further see Para. 0044, wherein duplicates are detected and removed based on the aligned map (i.e. a third object recognition).)

Shen does not specifically state generating an aligned heat map by matching the scaled first object recognition heat map with the second object recognition heat map for the overall region. However, Schulte teaches: generating an aligned heat map by matching the scaled first object recognition heat map with the second object recognition heat map for the overall region. (See at least Para. 0037, 0041, and 0046-0047, wherein two heat maps (i.e. the scaled first object recognition heat map (Para. 0041) and the second object recognition heat map) are aligned by calibrating one heat map to the other.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shen to incorporate the teachings of Schulte to include the limitation as recited above, since “the average pixel intensities may advantageously be used to separate objects from each other. As can be observed in the two profile lines of horizontal profile 704 and vertical profile 706, objects may be separated from each other based on the local minimums” (Para. 0070, Schulte), which would create a more robust object recognition system for separating objects from each other and would improve the third object recognition. Additionally, a person having ordinary skill in the art would have had a reasonable expectation of success in combining the teachings of Shen and Schulte. The claimed invention is merely a combination of known elements; in combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the results of the combination would have been predictable.

Regarding claim 20, Shen and Schulte disclose the same limitations as recited in claim 9 above, and claim 20 is therefore rejected under the same rejection and obviousness rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Punjani et al. US20200333270A1 discloses “in each iteration of the iterative refinement approach, carried out to determine a 3D density map and corresponding GS-FSC, 2D particle images in a particular half-set are first aligned to a most recently computed version of the 3D density map (referred to as a “half-map”) associated with that half-set; for example, a version from a previous iterative refinement iteration.
For each half-set, the particle images in each half-set, along with the alignments, are used to reconstruct a new version of the 3D density map. Thus, the two resulting 3D half-maps will contain the same 3D structure. Advantageously, the two 3D half-maps are derived from independent data, and thus the 3D structural signal will correlate between the two half-maps, but the noise will generally not correlate between the two half-maps.” (See Para. 0071)

Tang US20200217656A1 discloses “calibrating a camera based on camera calibration images to obtain a camera parameter; detecting parallel lane lines to obtain a vanishing point of the parallel lane lines according to the detected parallel lane lines; calculating a pitch angle of the camera according to the camera parameter and the vanishing point; determining information of an object to be detected in an image captured by the camera; and calculating a distance from the object to be detected to the camera and a size of the object to be detected according to the information of the object to be detected, the pitch angle of the camera, and the camera parameter.” (See abstract).

Cohen et al. US20170371347A1 discloses “analyzing the at least one image to identify a side of a parked vehicle, identify a first structural feature of the parked vehicle and a second structural feature of the parked vehicle, identify a door edge of the parked vehicle in a vicinity of the first and second structural features, determine a change of an image characteristic of the door edge of the parked vehicle, and alter a navigational path of the host vehicle based at least in part on the change of the image characteristic of the door edge of the parked vehicle.” (See abstract).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GABRIEL J LAMBERT whose telephone number is (571) 272-4334. The examiner can normally be reached M-F 10:00 am - 6:00 pm MDT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Erin Piateski, can be reached at (571) 270-7429. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Erin M Piateski/
Supervisory Patent Examiner, Art Unit 3669

/G.J.L./
Examiner, Art Unit 3669
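For orientation on the combination mapped to claim 11, the flow the examiner cites from Shen (Paras. 0042-0044) amounts to scaling detections from the high-resolution crop into full-image coordinates, aligning them by the crop's location, and removing duplicates that overlap detections from the low-resolution pass. A minimal sketch of that flow; the box format, helper names, and the IoU threshold are illustrative assumptions and do not appear in Shen:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def merge_detections(crop_boxes, full_boxes, scale, offset, iou_thresh=0.5):
    """Scale crop detections into full-image coordinates, align them by
    the crop's offset within the full image, then drop any that
    duplicate a detection from the low-resolution (full-image) pass."""
    ox, oy = offset
    aligned = [(x1 * scale + ox, y1 * scale + oy,
                x2 * scale + ox, y2 * scale + oy)
               for x1, y1, x2, y2 in crop_boxes]
    merged = list(full_boxes)
    for box in aligned:
        if all(iou(box, kept) < iou_thresh for kept in merged):
            merged.append(box)
    return merged
```

For example, a crop detection at (10, 10, 30, 30) with scale 2.0 and crop offset (80, 80) lands on (100, 100, 140, 140) in full-image coordinates; if the full-image pass already found that box, the duplicate is dropped.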

Prosecution Timeline

Nov 20, 2024
Application Filed
Mar 11, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12583464
STREAMING OBJECT DETECTION AND SEGMENTATION WITH POLAR PILLARS
2y 5m to grant Granted Mar 24, 2026
Patent 12584761
METHODS AND SYSTEMS FOR PROVIDING DYNAMIC IN-VEHICLE CONTENT BASED ON DRIVING AND NAVIGATION DATA
2y 5m to grant Granted Mar 24, 2026
Patent 12534880
WORKING MACHINE
2y 5m to grant Granted Jan 27, 2026
Patent 12512901
D-ATIS COLLECTION AND DISSEMINATION SYSTEMS AND METHODS
2y 5m to grant Granted Dec 30, 2025
Patent 12497070
VEHICLE BEHAVIOR GENERATION DEVICE, VEHICLE BEHAVIOR GENERATION METHOD, AND VEHICLE BEHAVIOR GENERATION PROGRAM PRODUCT
2y 5m to grant Granted Dec 16, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
67%
Grant Probability
79%
With Interview (+11.8%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 130 resolved cases by this examiner. Grant probability derived from career allow rate.
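The headline projections follow directly from the examiner's career numbers: 87 grants out of 130 resolved cases gives the 67% base rate, and the 79% with-interview figure is consistent with the +11.8% lift simply adding to that base. A minimal sketch of the arithmetic, assuming the lift is additive (the function names are illustrative, not from the tool):

```python
def grant_probability(granted: int, resolved: int) -> float:
    """Career allow rate, as a percentage."""
    return 100 * granted / resolved

def with_interview(base_pct: float, lift_pct: float) -> float:
    """Interview-adjusted rate, assuming the lift adds to the base."""
    return min(base_pct + lift_pct, 100.0)

base = grant_probability(87, 130)          # 87 granted / 130 resolved
print(round(base))                         # 67
print(round(with_interview(base, 11.8)))   # 79
```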
