Prosecution Insights
Last updated: April 19, 2026
Application No. 18/632,759

METHOD FOR IDENTIFYING A SEAT OCCUPANCY IN A VEHICLE

Non-Final OA: §101, §102, §103
Filed: Apr 11, 2024
Examiner: NWUHA, LOUIS TOCHUKWU ENE
Art Unit: 2674
Tech Center: 2600 (Communications)
Assignee: Robert Bosch GmbH
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Estimated OA Rounds: 1-2
Estimated Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases; 0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift, with vs. without an interview, over resolved cases with an interview)
Typical Timeline: 2y 9m avg prosecution
Career History: 11 total applications across all art units; 11 currently pending

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§102: 13.0% (-27.0% vs TC avg)
§103: 78.3% (+38.3% vs TC avg)

Tech Center average estimate shown for comparison. Based on career data from 0 resolved cases.
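The per-statute rates and their "vs TC avg" deltas above are mutually consistent: all three imply the same Tech Center baseline. A quick sketch of that arithmetic (the baseline figure is inferred here, not reported by the dashboard):

```python
# Statute-specific rates and deltas, as shown in the table above (percent).
examiner_rate = {"101": 8.7, "102": 13.0, "103": 78.3}
delta_vs_tc = {"101": -31.3, "102": -27.0, "103": +38.3}

# Implied Tech Center baseline: examiner rate minus the reported delta.
implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)  # all three statutes imply the same 40.0% baseline
```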

Office Action

Rejections under §101, §102, and §103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

2. The United States Patent & Trademark Office appreciates the application submitted by the inventor/assignee. The United States Patent & Trademark Office has reviewed the application and makes the comments below.

Priority

3. Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. DE 10 2023 204 206.1, filed on 5/8/2023.

Information Disclosure Statement

4. The information disclosure statement (IDS) was submitted on 4/11/2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

5. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

6. Claims 14 and 15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. When reviewing independent claim 14, and based upon consideration of all of the relevant factors with respect to the claim as a whole, claims 14 and 15 are held to claim an abstract idea without reciting elements that amount to significantly more than the abstract idea, and are therefore rejected as ineligible subject matter under 35 U.S.C. 101. The Examiner analyzes Claim 14 below; similar rationale applies to independent Claim 15. The rationale, under MPEP § 2106, for this finding is explained below:

7.
The claimed invention (1) must be directed to one of the four statutory categories, and (2) must not be wholly directed to subject matter encompassing a judicially recognized exception, as defined below. The following two-step analysis is used to evaluate these criteria.

8. Step 1: Is the claim directed to one of the four patent-eligible subject matter categories: process, machine, manufacture, or composition of matter?

9. When examining the claim under 35 U.S.C. 101, the Examiner interprets that the claims are related to a process, since the claim is directed to a neural network autonomous vehicle training method.

10. Step 2A, Prong 1: Does the claim wholly embrace a judicially recognized exception, which includes laws of nature, physical phenomena, and abstract ideas, or is it a particular practical application of a judicial exception?

11. The Examiner interprets that the judicial exception applies since the Claim 14 limitations of "receiving monitoring data of a camera of a vehicle interior; and learning, for each pixel in a frame, to which components of the vehicle the pixel is assigned with detail-by-detail classification" are directed to the abstract idea of mental processes. The claim relates to mental processes by reciting receiving monitoring data, and learning, for each pixel in a frame, to which component of the vehicle the pixel is assigned. The Claim 14 limitations of receiving data of the interior of the vehicle and learning, for each pixel in a frame, to which component of the vehicle the pixel is assigned are an abstract idea and can be entirely performed in the mind (data analysis). The claim recites these steps as being performed for training a neural network that will assign pixels for vehicle occupancy detection, but the steps themselves are not tied to any specific implementation of such training in a specific machine.
The "for training" limitations in the preamble are statements of purpose of the method, and the steps themselves are not tied to machines performing the training. For example, the step of learning, for each pixel, to which vehicle component the pixel is assigned, even in the context of training a neural network, could be performed by a human mentally performing the analysis as a task to determine solutions with which to check the training of the neural network, before, during, or after the training, and entirely disconnected from the machine performing neural network training.

12. Step 2A, Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application?

13. The Examiner interprets that the Claim 14 limitations do not provide additional elements, or a combination of additional elements, that integrate the judicial exception into a practical application, since the claims merely add insignificant extra-solution activity to the judicial exception. See MPEP § 2106.04(a). Because a judicial exception is not eligible subject matter, Bilski, 561 U.S. at 601, 95 USPQ2d at 1005-06 (quoting Chakrabarty, 447 U.S. at 309, 206 USPQ at 197 (1980)), if there are no additional claim elements besides the judicial exception, or if the additional claim elements merely recite another judicial exception, that is insufficient to integrate the judicial exception into a practical application. See, e.g., RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1327, 122 USPQ2d 1377 (Fed. Cir. 2017) ("Adding one abstract idea (mental processes) to another abstract idea (identifying and learning) does not render the claim non-abstract").

14. Step 2B: If integration of the judicial exception into a practical application is not recited in the claim, the Examiner must determine whether the claim recites additional elements that amount to significantly more than the judicial exception.

15.
The Examiner interprets that the Claims do not amount to significantly more, since the Claims simply append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine, and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984.

16. Furthermore, the generic computer components of the computer system, recited as performing generic computer functions that are well-understood, routine, and conventional activities, amount to no more than implementing the abstract idea with a computerized system. Claim 15 recites the same abstract idea, where learning poses of a plurality of persons is data analysis capable of being performed entirely in the human mind, receiving camera data is extra-solution activity, and the same "for training" language is considered not to limit the steps to being implemented in a machine; the claims are therefore not drawn to eligible subject matter, as they are directed to the abstract idea without significantly more. Therefore, the claims are rejected under 35 U.S.C. 101.

Claim Rejections - 35 USC § 102

17. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless -

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

18. Claim 15 is rejected under 35 U.S.C. 102(a)(2) as being anticipated by Gronau et al. (US Patent Pub. No. US 2022/0114817 A1, hereafter referred to as Gronau).

19. Regarding Claim 15, Gronau teaches a method for training a second neural network to recognize a pose of a person (paragraphs 46-48, 115, 155, 165-166, and 228, Gronau teaches using one or more trained neural networks for the detection and measuring of people, specifically their face, hands, and torso, using object pose data, as well as training neural networks to output a number of points, such as a predefined number of points corresponding to certain body parts or a skeleton of lines, and detecting human contour using well-known detection or recognition methods.) for use in a method for identifying a seat occupancy in a vehicle (Fig. 4, 8A, 9A, and 9B, paragraphs 93, 102-103, 208, 257-258, and 277-278, Gronau teaches a system for identifying the number of passengers in the vehicle by using their position and location, as well as monitoring the interior of a passenger compartment in a vehicle, in other words detecting occupancy state.), the method comprising the following steps: receiving monitoring data of a camera (Fig.
2B, and 8A, paragraphs 153 and 177-179, Gronau teaches a monitoring system inside the vehicle cabin that has a video camera, to obtain data using sensors for analyzing identified objects inside the vehicle cabin.); and learning poses of a plurality of persons (paragraph 68, Gronau teaches a method for detecting the number of occupants in a vehicle having an interior passenger compartment by applying one or more pose detection algorithms on said one or more images to yield one or more skeleton models, respectively, for each of one or more of said occupants, and generating a prediction model based on the one or more skeleton models to detect the number of occupants in said vehicle.).

Claim Rejections - 35 USC § 103

20. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

21. Claims 1-7, 9, and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Gronau et al. (US Patent Pub. No. US 2022/0114817 A1, hereafter referred to as Gronau) in view of Breed (US Patent Pub. No. US 2005/0131607 A1, hereafter referred to as Breed 2005a).

22. Regarding Claim 1, Gronau teaches a method for identifying a seat occupancy in a vehicle (Fig. 4, 8A, 9A, and 9B, paragraphs 93, 102-103, 208, 257-258, and 277-278, Gronau teaches a system for identifying the number of passengers in the vehicle by using their position and location, as well as monitoring the interior of a passenger compartment in a vehicle, in other words detecting occupancy state.), comprising the following steps: receiving monitoring data of a camera of a vehicle interior (Fig.
2B, and 8A, paragraphs 153 and 177-179, Gronau teaches a monitoring system inside the vehicle cabin that has a video camera, to obtain data using sensors for analyzing identified objects inside the vehicle cabin.); recognizing one or more persons in the monitoring data using a second neural network for recognizing a pose of a person (paragraphs 46-48, 115, 155, 165-166, and 228, Gronau teaches using one or more trained neural networks for the detection and measuring of people, specifically their face, hands, and torso, using object pose data, as well as training neural networks to output a number of points, such as a predefined number of points corresponding to certain body parts or a skeleton of lines, and detecting human contour using well-known detection or recognition methods.); and merging the assigned pixels to the components of the vehicle with the recognized one or more persons, for identifying a seat occupancy in the vehicle (Fig. 9B, paragraphs 51-52, 66-72, 92, and 265-266, Gronau teaches a fusing algorithm comprising two or more types of data inputs and applying a pose detection algorithm on each of the obtained 2D images to yield at least one skeleton representation of the one or more detected occupants to detect the occupancy state of the vehicle using one or more features of the occupants.).

Gronau does not teach assigning pixels of the monitoring data to components of the vehicle by semantic segmentation using a first neural network. Breed 2005a is in the same field of art of obtaining information about seat occupancy in an automotive vehicle.
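For orientation, the claim architecture mapped in this rejection (monitoring data from an interior camera, per-pixel component labels from a "first neural network," person poses from a "second neural network," and a merging step that yields seat occupancy) can be sketched as a toy pipeline. All names, labels, and coordinates below are hypothetical stand-ins for illustration only; this does not implement Gronau, Breed 2005a, or the application itself.

```python
# Toy stand-in for the claimed pipeline. The two "networks" are replaced by
# fixed dummy outputs; only the merging step is actually computed.

# Hypothetical output of the first network: per-pixel vehicle-component
# labels (semantic segmentation of one camera frame).
segmentation = [
    ["seat_1", "seat_1", "other", "seat_2"],
    ["seat_1", "seat_1", "other", "seat_2"],
    ["door",   "other",  "other", "seat_2"],
]

# Hypothetical output of the second network: one articulation point
# (row, col) per recognized person.
persons = {"person_a": (1, 0), "person_b": (2, 3)}

def identify_seat_occupancy(segmentation, persons):
    """Merge per-pixel component labels with recognized persons: a seat is
    reported occupied when a person's articulation point lies on it."""
    occupancy = {}
    for name, (row, col) in persons.items():
        label = segmentation[row][col]
        if label.startswith("seat"):
            occupancy[label] = name
    return occupancy

print(identify_seat_occupancy(segmentation, persons))
# prints {'seat_1': 'person_a', 'seat_2': 'person_b'}
```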
Further, Breed 2005a teaches assigning pixels of the monitoring data to components of the vehicle by semantic segmentation using a first neural network (paragraphs 461 and 674-675, Breed 2005a teaches the use of multiple frequencies with ultrasound to change a static system, allowing vehicle occupants to be tracked during pre-crash braking, where the color of the skin of an occupant is a reliable measure of the presence of an occupant that makes segmentation of the image more readily accomplished, which is an example of semantic segmentation, and the determination is made by a first neural network whether the object is of a type requiring deployment of the occupant restraint device in the event of a crash involving the vehicle, based on the waves received by at least some of the transducers after being modified by passing through the passenger compartment.).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Gronau by incorporating the method of using multiple frequencies with ultrasound to change a static system so that vehicle occupants can be tracked, as taught by Breed 2005a, to make an invention that can automatically assign pixels within images to specific features within the interior vehicle compartments; one of ordinary skill in the art would be motivated to combine the references since there is a need for a simpler system that minimizes the amount of data stored and initially processed when diagnosing the state of a vehicle with respect to its stability, proper running, and operating conditions (paragraphs 91-96, Breed 2005a). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

23.
In regards to Claim 2, Gronau in view of Breed 2005a teaches wherein, in the step of recognizing the one or more persons in the monitoring data, a two-dimensional pose of a person or a three-dimensional pose of the person is recognized (Fig. 9A, paragraphs 61-63, Gronau teaches detecting the number of occupants in a vehicle having an interior passenger compartment using a pose detection algorithm to obtain 2D poses from obtained sequences of 2D images to yield a 2D skeleton representation of one or more occupants.).

24. In regards to Claim 3, Gronau in view of Breed 2005a teaches wherein, in the step of recognizing one or more persons in the monitoring data, a size of the one or more persons is recognized (paragraphs 225 and 253, Gronau teaches that, based on image data and pose detections, one or more measurable characteristics of objects in the vehicle, such as the occupant's height, weight, mass, and/or body portion sizes such as leg or arm length or width, head diameter, skin color, and other characteristics, are recognized and estimated.).

25. In regards to Claim 4, Gronau in view of Breed 2005a teaches wherein, in the step of recognizing one or more persons in the monitoring data, articulation points of the one or more persons are recognized (paragraphs 165-166, Gronau teaches a predefined number of points that connect distinct parts of objects, such as human body parts, as well as a correspondence to certain body parts or a skeleton of lines, which is an example of articulation points, to detect a person, such as a human contour; specifically, body parts may be detected to detect body pose.).

26.
In regards to Claim 5, Gronau in view of Breed 2005a teaches wherein the first neural network for semantic segmentation is a trained neural network (paragraphs 674-677 and 711, Breed 2005a teaches that the first neural network is trained on signals from at least some of the transducers representative of waves received by the transducers when different objects are situated in the passenger compartment.).

27. In regards to Claim 6, Gronau in view of Breed 2005a teaches wherein the second neural network for recognizing a pose of a person is a trained neural network (paragraphs 46-48, 115, 155, 165-166, and 228, Gronau teaches using one or more trained neural networks for the detection and measuring of people, specifically their face, hands, and torso, using object pose data, as well as training neural networks to output a number of points, such as a predefined number of points corresponding to certain body parts or a skeleton of lines, and detecting human contour using well-known detection or recognition methods.).

28. In regards to Claim 7, Gronau in view of Breed 2005a teaches wherein, in the step of assigning pixels, a pixel-precise assignment of the monitoring data to components of the vehicle takes place per frame (paragraphs 283, 477, and 701, Breed 2005a teaches a phased array system for precise distance measurements and mapping of the components of the passenger compartment, as well as pixel tracking in order to know the precise position of the driver's head and chest in the identification process for automobile occupancy, which is an example of pixel-precise assignment.).

29. In regards to Claim 9, Gronau in view of Breed 2005a teaches wherein, in the step of assigning pixels, pixels are assigned to a door of the vehicle which is arranged in front of a person (Fig.
8A, paragraphs 304, 307, and 346, Breed 2005a teaches transducers acting as transmitters and detectors, as well as receivers, for the pixel assignment and image generation of the presence of a person in the recognition of a door opening, where the transmitters, detectors, and receivers are mounted/arranged above the front passenger side door, driver's side door, near the dome light, and in the center headliner.), wherein, in the step of merging, it is recognized that the person is arranged behind the door outside the vehicle (paragraphs 615-620, Breed 2005a teaches the use of color and natural light multispectral imaging, which assigns values to each pixel based on the wavelengths of light captured by the sensors, such as mid-infrared, motion, and ultrasonic sensors, to be used in recognizing objects inside and outside of a vehicle, which would include recognizing a person inside and outside a vehicle.).

30.
In regards to Claim 11, Gronau in view of Breed 2005a teaches wherein, in the step of assigning pixels, the pixels can be assigned to a vehicle seat pixel class (paragraphs 339 and 461-462, Breed 2005a teaches a pattern recognition algorithm system being used to classify the occupancy of a seat into a variety of classes, such as an empty seat, an infant seat, a child in or out of position, and an adult in or out of position, where color in the images is used when available, and the distinguishability of two objects when observed in color, or with illumination from other parts of the electromagnetic spectrum, is used, which are examples of classes.), for recognizing which vehicle seat the pixels are assigned to (paragraphs 339 and 461-462, Breed 2005a teaches a pattern recognition algorithm system being used to classify the occupancy of a seat into a variety of classes, such as an empty seat, an infant seat, a child in or out of position, and an adult in or out of position, as well as the distinguishability of two objects when observed in color or with illumination from other parts of the electromagnetic spectrum.).

31. Regarding Claim 12, Gronau teaches a system for identifying a seat occupancy in a vehicle (Fig. 4, 8A, 9A, and 9B, paragraphs 93, 102-103, 208, 257-258, and 277-278, Gronau teaches a system for identifying the number of passengers in the vehicle by using their position and location, as well as monitoring the interior of a passenger compartment in a vehicle, in other words detecting occupancy state.), wherein the system is configured to: receive monitoring data of a camera of a vehicle interior (Fig.
2B, and 8A, paragraphs 153 and 177-179, Gronau teaches a monitoring system inside the vehicle cabin that has a video camera, to obtain data using sensors for analyzing identified objects inside the vehicle cabin.); recognize one or more persons in the monitoring data using a second neural network for recognizing a pose of a person (paragraphs 46-48, 115, 155, 165-166, and 228, Gronau teaches using one or more trained neural networks for the detection and measuring of people, specifically their face, hands, and torso, using object pose data, as well as training neural networks to output a number of points, such as a predefined number of points corresponding to certain body parts or a skeleton of lines, and detecting human contour using well-known detection or recognition methods.); and merge the assigned pixels to the components of the vehicle with the recognized one or more persons, for identifying a seat occupancy in the vehicle (Fig. 9B, paragraphs 51-52, 66-72, 92, and 265-266, Gronau teaches a fusing algorithm comprising two or more types of data inputs and applying a pose detection algorithm on each of the obtained 2D images to yield at least one skeleton representation of the one or more detected occupants to detect the occupancy state of the vehicle using one or more features of the occupants.).

Gronau does not teach to assign pixels of the monitoring data to components of the vehicle by semantic segmentation using a first neural network. Breed 2005a is in the same field of art of obtaining information about seat occupancy in an automotive vehicle.
Further, Breed 2005a teaches to assign pixels of the monitoring data to components of the vehicle by semantic segmentation using a first neural network (paragraphs 461 and 674-675, Breed 2005a teaches the use of multiple frequencies with ultrasound to change a static system, allowing vehicle occupants to be tracked during pre-crash braking, where the color of the skin of an occupant is a reliable measure of the presence of an occupant that makes segmentation of the image more readily accomplished, which is an example of semantic segmentation, and the determination is made by a first neural network whether the object is of a type requiring deployment of the occupant restraint device in the event of a crash involving the vehicle, based on the waves received by at least some of the transducers after being modified by passing through the passenger compartment.).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Gronau by incorporating a system that uses multiple frequencies with ultrasound to track vehicle occupants, as taught by Breed 2005a, to make an invention that can automatically assign pixels within images to specific features within the interior vehicle compartments; one of ordinary skill in the art would be motivated to combine the references since there is a need to minimize the amount of data stored and processed when evaluating the state of a vehicle with respect to its stability, proper running, and operating conditions (paragraphs 91-96, Breed 2005a). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

32. In regards to Claim 13, Gronau in view of Breed 2005a teaches a camera configured to record the monitoring data of the vehicle interior (Fig.
2B, and 7, paragraphs 129, 153, 177-179, 209, and 224, Gronau teaches a video camera within the monitoring system inside the vehicle cabin to obtain video image data, such as two video images, for analyzing identified objects inside the vehicle cabin.).

33. Regarding Claim 14, Gronau teaches using a method for identifying a seat occupancy in a vehicle (Fig. 4, 8A, 9A, and 9B, paragraphs 93, 102-103, 208, 257-258, and 277-278, Gronau teaches a system for identifying the number of passengers in the vehicle by using their position and location, as well as monitoring the interior of a passenger compartment in a vehicle, in other words detecting occupancy state.), the method for training comprising the following steps: receiving monitoring data of a camera of a vehicle interior (Fig. 2B, and 8A, paragraphs 153 and 177-179, Gronau teaches a monitoring system inside the vehicle cabin that has a video camera, to obtain data using sensors for analyzing identified objects inside the vehicle cabin.); and learning, for each pixel in a frame, to which component of the vehicle the pixel is assigned (paragraphs 195-197, 244-245, 254, 257, and 294, Gronau teaches identified object values being obtained based on information data from the sensor that outputs the range to each pixel or sample in the image, which would be an example of learning for each pixel in a frame.).

Gronau does not teach a method for training a first neural network to assign pixels of monitoring data to components of the vehicle by semantic segmentation. Breed 2005a is in the same field of art of obtaining information about seat occupancy in an automotive vehicle.
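The training step at issue in paragraph 33, learning for each pixel in a frame which vehicle component the pixel is assigned to, can be illustrated with a deliberately trivial stand-in: a per-pixel majority vote over labeled frames takes the place of actual neural-network training. All data and names below are invented for illustration and come from neither the application nor the cited references.

```python
from collections import Counter

# Hypothetical labeled monitoring data: per-pixel component labels for
# three 2x2 frames (the kind of ground truth a real system would train on).
frames_labels = [
    [["seat", "door"], ["seat", "other"]],
    [["seat", "door"], ["seat", "door"]],
    [["seat", "door"], ["floor", "door"]],
]

def learn_pixel_assignment(frames_labels):
    """'Learn' a component label for each pixel position by majority vote
    across frames (a toy proxy for the claimed training step)."""
    rows, cols = len(frames_labels[0]), len(frames_labels[0][0])
    learned = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            votes = Counter(frame[r][c] for frame in frames_labels)
            learned[r][c] = votes.most_common(1)[0][0]
    return learned

print(learn_pixel_assignment(frames_labels))
# prints [['seat', 'door'], ['seat', 'door']]
```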
Further, Breed 2005a teaches a method for training a first neural network to assign pixels of monitoring data to components of the vehicle by semantic segmentation (paragraphs 461 and 674-675, Breed 2005a teaches the use of multiple frequencies with ultrasound to change a static system, allowing vehicle occupants to be tracked during pre-crash braking, where the color of the skin of an occupant is a reliable measure of the presence of an occupant that makes segmentation of the image more readily accomplished, which is an example of semantic segmentation, and the determination is made by a first neural network whether the object is of a type requiring deployment of the occupant restraint device in the event of a crash involving the vehicle, based on the waves received by at least some of the transducers after being modified by passing through the passenger compartment.).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Gronau by incorporating the method of using multiple frequencies with ultrasound to change a static system so that vehicle occupants can be tracked, as taught by Breed 2005a, to make an invention that can automatically assign pixels within images to specific features within the interior vehicle compartments; one of ordinary skill in the art would be motivated to combine the references since there is a need for a simpler system that minimizes the amount of data stored and initially processed when diagnosing the state of a vehicle with respect to its stability, proper running, and operating conditions (paragraphs 91-96, Breed 2005a). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

34. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Gronau et al. (US Patent Pub. No.
US 2022/0114817 A1, hereafter referred to as Gronau) in view of Breed (US Patent Pub. No. US 2005/0131607 A1, hereafter referred to as Breed 2005a), in further view of Breed et al. (US Patent Pub. No. US 2009/066065 A1, hereafter referred to as Breed 2009a).

35. Regarding Claim 8, Gronau in view of Breed 2005a teaches the method of Claim 1 for identifying a seat occupancy in a vehicle. Gronau in view of Breed 2005a does not teach wherein, in the step of assigning pixels, pixels are assigned to one or more components of the vehicle which are arranged in front of a person, wherein, in the step of merging, it is recognized that the person is arranged behind the one or more components of the vehicle. Breed 2009a is in the same field of art of obtaining information about seat occupancy in an automotive vehicle. Further, Breed 2009a teaches wherein, in the step of assigning pixels, pixels are assigned to one or more components of the vehicle which are arranged in front of a person (Fig. 16-17, paragraphs 19 and 167, Breed 2009a teaches using a first image receiver arranged at a first location for obtaining a first two-dimensional view of a portion of a vehicle compartment, including the passenger compartment behind the driver's seat, and a second image receiver arranged at a second location for obtaining a second two-dimensional view of the same portion of the compartment, where the receivers are pixel cameras such as CMOS dynamic and active pixel cameras.), wherein, in the step of merging, it is recognized that the person is arranged behind the one or more components of the vehicle (paragraphs 19 and 306-314, Breed 2009a teaches the use of a high dynamic range (HDRC) camera for the recognition of passengers in the passenger compartment of a vehicle, which is an example of pixel merging for recognition.).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Gronau and Breed 2005a by incorporating the method of recognizing a person behind a vehicle component, such as the front row of vehicle seats, that is taught by Breed 2009a, to make an invention that can identify passengers in a vehicle when they are in an obscured view. One of ordinary skill in the art would be motivated to combine the references because there is a need to know who is present in vehicles that are involved in accidents (paragraph 292, Breed 2005a). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Allowable Subject Matter

36. Claim 10 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Regarding Claim 10, no prior art teaches a method for identifying a seat occupancy in a vehicle wherein, in the step of assigning pixels, pixels are assigned to a vehicle seat which is arranged behind a person, and wherein, in the step of merging, it is recognized that the vehicle seat is arranged behind the person and the person is arranged in the vehicle seat.

Conclusion

37. Any inquiry concerning this communication or earlier communications from the examiner should be directed to LOUIS NWUHA, whose telephone number is (571) 272-0219. The examiner can normally be reached Monday to Friday, 8 am to 5 pm.

38. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

39.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Oneal Mistry, can be reached at 313-446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

40. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LOUIS NWUHA/
Examiner, Art Unit 2674

/ONEAL R MISTRY/
Supervisory Patent Examiner, Art Unit 2674

Prosecution Timeline

Apr 11, 2024
Application Filed
Mar 06, 2026
Non-Final Rejection — §101, §102, §103 (current)

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
