Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 119(e) as follows:
The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).
The disclosures of the prior-filed applications listed below fail to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for any of the currently pending claims of this application:
Application No. 63/445879, filed 02/15/2023
Application No. 63/488042, filed 03/02/2023
Application No. 63/530418, filed 08/02/2023
Application No. 63/541659, filed 09/29/2023
Only the disclosure of the prior-filed provisional application with Application No. 63/524035, filed 06/29/2023, provides adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for the currently pending claims of this application. All currently pending claims of this application are considered to be entitled to the benefit of a filing date of 06/29/2023.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 7-8 and 17-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 7 recites the limitation “the vehicle model” in line 4. There is insufficient antecedent basis for this limitation in the claim.
If the element "the vehicle model" of line 4 is intended to refer to the element "a model of the vehicle seat" in line 2, the examiner suggests amending the element of line 4 to recite "the model of the vehicle seat".
Claim 8 depends from claim 7 and is therefore similarly rejected. Claims 17-18 recite similar limitations and are therefore similarly rejected.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 9, 11-15, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hu (US 20210394710 A1) in view of Galan-Oliveras (US 20200017069 A1, hereinafter referred to as Galan).
Regarding claim 1, Hu discloses a computer-implemented method comprising: obtaining, by one or more processors (Fig. 4C, [0058], [0062] one or more processors 410), image data of a vehicle seat located within a vehicle (Fig. 1A, Fig. 3, [0156]-[0157] the device may receive data from a camera (i.e. images) monitoring the interior of a vehicle (e.g. a driver and seatbelt) where the image data includes specific points and a connecting point between a seatbelt and the vehicle; [0157] the device may also be used to determine whether child safety seats (i.e. vehicle seats) are properly installed);
inputting, by the one or more processors, the image data into a machine vision model, wherein the machine vision model is trained: a) using historical image data of vehicle seats within vehicles, wherein the historical image data is labeled to indicate whether a depicted vehicle seat is properly installed ([0163], [0033]-[0034] training a machine learning model based on images that are labeled with the locations of markers/points and whether the locations correspond to proper or improper positioning of a safety belt; [0157] the model may also be used to determine whether child safety seats are properly installed),
b) to learn a relationship between extracted features of the historical image data and a properness of an installation of a vehicle seat ([0033]-[0034] the model establishes a relationship between marker locations and the properness of the wearing of a seatbelt; [0157] the model may also be used to determine properness of child safety seat installation based on marker locations), and
c) to output a determination of a properness of an installation of a vehicle seat in response to detecting input image data ([0033] an output of whether a seatbelt is worn in the proper manner or in any improper manner in response to the images and marker locations; [0157] the device may determine whether a car seat is installed properly); and
presenting, by the one or more processors, an indication of the output of the machine vision model (Fig. 6, [0160] a visible or audible warning may be initiated in response to determining safety state (i.e. an indication of the output)).
Hu fails to disclose wherein the image data includes one or more connecting points of the vehicle seat to the vehicle.
Galan, in a related system from the same field of automatic machine vision to determine a state of a vehicle safety seat (Abstract), discloses where the image data includes one or more connecting points of the vehicle seat to the vehicle (Fig. 1 locking clip; Fig. 2 step 66; Fig. 3, [0017]-[0018] images to be used by automated vision system of a vehicle interior including a child restraint system CRS (i.e. vehicle seat), including connection of CRS to the vehicle such as a locking clip).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Galan with Hu and include image data where the images include one or more connecting points of the vehicle seat to the vehicle, as disclosed by Galan, with a computer-implemented method for generating an indication of an output of a machine vision model for determining properness of installation of a vehicle seat, as disclosed by Hu, for the purpose of ensuring compliance with safety regulations for users of vehicle safety seats, which enhances the safety of children and passengers in a vehicle (See Galan: Abstract, [0002], [0011]).
Regarding claim 2, Hu in view of Galan discloses the computer-implemented method of claim 1 as applied above. Hu further discloses wherein the image data includes two or more images depicting the one or more connecting points from different orientations ([0154] the system may include stereoscopic cameras (i.e. cameras capturing an area from different angles)).
Regarding claim 3, Hu in view of Galan discloses the computer-implemented method of claim 1 as applied above. Hu further discloses wherein the output of the machine vision model includes a confidence in a labeling decision (Fig. 4C, [0072]-[0073] deep learning accelerator; [0087] the machine vision system may output a confidence measure in each classification decision).
Regarding claim 4, Hu in view of Galan discloses the computer-implemented method of claim 3 as applied above. Hu further discloses determining, via the one or more processors, that the confidence in the labeling decision is below a threshold value; and presenting, via the one or more processors, the image data to a reviewer to obtain a review decision of whether the vehicle seat is properly installed within the vehicle ([0087], [0131] when a confidence score does not meet a threshold, a supervisory MCU (i.e. reviewer) may review the decision and determine the appropriate outcome).
Regarding claim 5, Hu in view of Galan discloses the computer-implemented method of claim 4 as applied above. Hu further discloses comparing, by the one or more processors, the review decision to the labeling decision of the machine vision model; and retraining, by the one or more processors, the machine vision model based at least in part upon the comparison ([0130]-[0132] the reviewer (e.g. supervisory MCU) is trained to learn when certain results can be trusted based on information about the systems and the contexts of the results (i.e. the correct/review decision is compared to the model decision and the model is trained based on the comparison)).
Regarding claim 9, Hu in view of Galan discloses the computer-implemented method of claim 1 as applied above. Hu further discloses wherein the image data is captured by an image sensor communicatively coupled to the vehicle ([0156], [0028] a sensor such as a camera is coupled to the vehicle).
Regarding claim 11, Hu in view of Galan discloses everything claimed as applied above (See rejection of claim 1). Hu additionally discloses a computer system comprising: one or more processors; a non-transitory program memory coupled to the one or more processors and storing executable instructions that, when executed by the one or more processors, cause the computer system to ([0148], [0150] computer system including nonvolatile computer-storage media storing instructions which are executed by CPU/processors).
Regarding claim 12, Hu in view of Galan discloses everything claimed as applied above (See rejection of claim 2).
Regarding claim 13, Hu in view of Galan discloses everything claimed as applied above (See rejection of claim 3).
Regarding claim 14, Hu in view of Galan discloses everything claimed as applied above (See rejection of claim 4).
Regarding claim 15, Hu in view of Galan discloses everything claimed as applied above (See rejection of claim 5).
Regarding claim 19, Hu in view of Galan discloses everything claimed as applied above (See rejection of claim 9).
Regarding claim 20, Hu in view of Galan discloses everything claimed as applied above (See rejection of claim 1). Hu additionally discloses a tangible, non-transitory computer-readable medium storing executable instructions that, when executed by one or more processors of a computer system cause the computer system to execute a method ([0148], [0150] computer system including nonvolatile computer-storage media storing instructions which are executed by CPU/processors).
Claims 6-7, 10, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Hu (US 20210394710 A1) in view of Galan-Oliveras (US 20200017069 A1, hereinafter referred to as Galan), and further in view of Friedrichs (US 20240013556 A1, which was effectively filed 07/07/2022).
Regarding claim 6, Hu in view of Galan discloses the computer-implemented method of claim 1 as applied above. Hu fails to disclose wherein the machine vision model includes an object identification model trained to identify a presence of a vehicle seat in the input image data; and the output of the machine vision model includes an indication that no vehicle seat was detected when the object identification model does not detect the presence of a vehicle seat in the input image data.
Friedrichs, in a related system from the same field of endeavor of vehicle seat detection and classification (Abstract), discloses the machine vision model includes an object identification model trained to identify a presence of a vehicle seat in the input image data (Fig. 8, [0027], [0034] the system may recognize states where only the built-in vehicle seat is present, where different types of child seats are present, etc.); and
the output of the machine vision model includes an indication that no vehicle seat was detected when the object identification model does not detect the presence of a vehicle seat in the input image data (Fig. 3, [0027], [0036] a seat occupancy class, which may include an empty seat (i.e. built-in vehicle seat with no additional child or safety seat on it), is output).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Friedrichs with Hu in view of Galan and identify the presence of a vehicle seat and output an indication that no vehicle seat is detected, as disclosed by Friedrichs, as part of a computer-implemented method for generating an indication of an output of a machine vision model for determining properness of installation of a vehicle seat, as disclosed by Hu in view of Galan, for the purpose of improving the determination of appropriate safety measures and improving safety for passengers of a vehicle (See Friedrichs, [0002]-[0004], [0023]).
Regarding claim 7, Hu in view of Galan and Friedrichs discloses the computer-implemented method of claim 6 as applied above. Hu fails to disclose wherein the object identification model is trained to identify a model of the vehicle seat detected in the input image data; and the output of the machine vision model includes an indication of the vehicle model.
Galan, in a related system from the same field of automatic machine vision to determine a state of a vehicle safety seat (Abstract), discloses the object identification model is trained to identify a model of the vehicle seat detected in the input image data; and the output of the machine vision model includes an indication of the vehicle model ([0017], [0025] the system may determine and output the type of CRS (e.g. booster seat, car seat, rear-facing, etc.)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Galan with Hu in view of Galan and Friedrichs and identify and output an indication of a model of a vehicle seat, as disclosed by Galan, as part of a computer-implemented method for generating an indication of an output of a machine vision model for determining properness of installation of a vehicle seat, as disclosed by Hu in view of Galan and Friedrichs, for the purpose of ensuring compliance with safety regulations for users of vehicle safety seats, which enhances the safety of children and passengers in a vehicle (See Galan: Abstract, [0002], [0011]).
Regarding claim 10, Hu in view of Galan discloses the computer-implemented method of claim 1 as applied above. Hu fails to disclose wherein the image data is captured by a mobile device of an individual associated with the vehicle.
Friedrichs, in a related system from the same field of endeavor of vehicle seat detection and classification (Abstract), discloses wherein the image data is captured by a mobile device of an individual associated with the vehicle ([0025] the image data may be captured by a driver's device which includes a camera).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Friedrichs with Hu in view of Galan and capture image data using an associated individual's mobile device, as disclosed by Friedrichs, as part of a computer-implemented method for generating an indication of an output of a machine vision model for determining properness of installation of a vehicle seat, as disclosed by Hu in view of Galan, for the purpose of improving the determination of appropriate safety measures and improving safety for passengers of a vehicle (See Friedrichs, [0002]-[0004], [0023]).
Regarding claim 16, Hu in view of Galan and Friedrichs discloses everything claimed as applied above (See rejection of claim 6).
Regarding claim 17, Hu in view of Galan and Friedrichs discloses everything claimed as applied above (See rejection of claim 7).
Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Hu (US 20210394710 A1) in view of Galan-Oliveras (US 20200017069 A1, hereinafter referred to as Galan), further in view of Friedrichs (US 20240013556 A1, which was effectively filed 07/07/2022), and further in view of Hasan (US 20220363168 A1).
Regarding claim 8, Hu in view of Galan and Friedrichs discloses the computer-implemented method of claim 7 as applied above. Hu fails to disclose obtaining, by the one or more processors, installation instructions associated with the model of the vehicle seat; and presenting, by the one or more processors, at least a portion of the installation instructions.
Hasan, in a related system from the same field of endeavor of guiding installation of vehicle seats (Abstract), discloses obtaining, by the one or more processors, installation instructions associated with the model of the vehicle seat; and presenting, by the one or more processors, at least a portion of the installation instructions ([0012], [0017] instructions for installing a specific vehicle seat and providing visual cues to a user indicating subsequent steps for installation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Hasan with Hu in view of Galan and Friedrichs and obtain and indicate to a user instructions for installing a vehicle seat, as disclosed by Hasan, as part of a computer-implemented method for generating an indication of an output of a machine vision model for determining properness of installation of a vehicle seat, as disclosed by Hu in view of Galan and Friedrichs, for the purpose of improving user accuracy in installation of a vehicle seat and thus improving passenger safety (See Hasan, [0003], [0012]).
Regarding claim 18, Hu in view of Galan, Friedrichs, and Hasan discloses everything claimed as applied above (See rejection of claim 8).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Kentley-Klay (US 11393238 B1) discloses vehicle seat safety monitoring based on a vehicle-interior imaging system, including determining whether a child is in a rear- or forward-facing vehicle seat.
Szakelyhidi (US 20130088058 A1) discloses a vehicle seat system including a system for determining whether the seat is installed properly and a built-in indicator indicating to a user whether the seat is properly installed in the vehicle.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAROLINE DEPALMA whose telephone number is (571) 270-0769. The examiner can normally be reached Mon-Thurs, 9:00 am-4:00 pm Eastern Time.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Moyer can be reached at 571-272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CAROLINE E. DEPALMA/Examiner, Art Unit 2675
/ANDREW M MOYER/Supervisory Patent Examiner, Art Unit 2675