DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-7, 9-13, and 15-19 are pending in this application.
Claims 1, 12, 13, and 17 are presented as currently amended claims.
No claims are newly presented.
Claim 8 is newly cancelled.
Continued Examination
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 9, 2025 has been entered.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that use the word “unit” but are nonetheless not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) recite(s) sufficient structure, materials, or acts to entirely perform the recited function. Such claim limitations are: (1) “sensor unit” in claims 1-7, 9-13, and 15-19; (2) “control unit” in claims 12-13; and (3) “computing unit” in claims 1 and 12-13.
Because these claim limitations are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph they are not being interpreted to cover only the corresponding structure, material, or acts described in the specification as performing the claimed function, and equivalents thereof.
If applicant intends to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to remove the structure, materials, or acts that perform the claimed function; or (2) present a sufficient showing that the claim limitation(s) does not recite sufficient structure, materials, or acts to perform the claimed function.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-2, 4-7, 9-13, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over McBride (US 20180022356 A1) in view of Breed et al. (US 20020116106 A1) (hereinafter collectively referenced as combination McBride). As regards the individual claims:
Regarding claim 1, McBride teaches a method for detecting objects in a first vehicle detection field, which comprises:
an interior and an exterior region of a vehicle, by means of a first sensor unit of the vehicle (McBride: ¶ 008; LIDAR sensor can be arranged to have a field of view that includes interior and exterior portions of the vehicle), the method comprising:

generating first detection data of the first detection field by detecting a first object in the exterior region of the first vehicle detection field with the first sensor unit (McBride: ¶ 023; include data indicating distance, size, height of an object in the vehicle 105 surrounding area, e.g., other vehicles, road structures, pedestrians);

generating second detection data of the first detection field by detecting a second object in the interior of the first vehicle detection field with the first sensor unit (McBride: ¶ 022; generating 3D model of the vehicle interior); and

processing the first detection data and the second detection data of the first vehicle detection field in a computing unit (McBride: ¶ 021; computing device 103 could combine data from the LIDAR sensor(s) 101 and other vehicle 105 sensors, e.g., vision cameras, infrared cameras);

. . . wherein the first vehicle detection field comprises a detection cone comprising at least a portion of the exterior region disposed in front of the vehicle (McBride: ¶ 023; generate a 3D map of vehicle 105 surrounding area encompassed by the vehicle exterior);

controlling a braking system of an autonomous vehicle based on the first detection data and the second detection data, thereby causing the autonomous vehicle to have a reduced speed prior to reaching a location of the first object (McBride: ¶ 036; the computing device 103 may use a data indicating a distance of the vehicle 105 to an object to determine whether the vehicle 105 brake should be actuated); and

arranging the first sensor unit such that the first vehicle detection field of the first sensor unit extends from a first vehicle side, at least across the interior, transversely to a longitudinal axis of the vehicle, to an opposite, second vehicle side into the exterior region. (McBride: Fig. 2B; [301]) (McBride: ¶ 018; As illustrated in FIG. 2B, the different sensors 101 may generate different respective fields of view that may overlap, e.g., cover a same portion of a vehicle 105 interior 205. Typically, each of the fields of view 301a, 301b may have a detection range of 200 meters)
McBride is either silent or does not explicitly teach:
wherein the first sensor unit comprises a camera arranged on a pillar of the vehicle in the interior of the vehicle; however, Breed does teach:
wherein the first sensor unit comprises a camera arranged on a pillar of the vehicle in the interior of the vehicle (Breed: ¶ 093; camera can be arranged . . . in an A-pillar or B-pillar of the vehicle to obtain images of an interior environment of the vehicle) (Breed: ¶ 168; also suited for monitoring the environment outside of the vehicle for purposes of blind spot detection)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Breed with the teachings of McBride because doing so would result in the predictable benefit that better detection of occupants would improve airbag systems. (Breed: ¶ 011).
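For purposes of illustration only, the following minimal sketch (in Python) shows one way detection data from the exterior and interior regions could feed a braking decision of the kind described at McBride: ¶ 036. All names, thresholds, and the deceleration model are hypothetical assumptions, not limitations taken from McBride, Breed, or the claims.

    # Illustrative sketch only; not McBride's, Breed's, or Applicant's actual
    # implementation. All names, thresholds, and the deceleration model are
    # hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        region: str        # "interior" or "exterior"
        distance_m: float  # distance from the vehicle to the detected object
        label: str         # e.g., "pedestrian", "occupant"

    def target_speed(exterior: Detection, interior: Detection,
                     current_mps: float, max_decel: float = 6.0) -> float:
        """Return a reduced target speed (m/s) so the vehicle slows before
        reaching the exterior object's location."""
        margin = max(exterior.distance_m - 5.0, 0.0)   # keep a 5 m buffer
        # v^2 = 2*a*d: highest speed from which the vehicle can stop within
        # the remaining margin at the assumed maximum deceleration.
        v_allow = (2.0 * max_decel * margin) ** 0.5
        if interior.label == "occupant":               # hypothetical use of
            v_allow *= 0.8                             # interior detection data
        return min(current_mps, v_allow)

    # Pedestrian detected 30 m ahead, occupant aboard, traveling 20 m/s:
    print(target_speed(Detection("exterior", 30.0, "pedestrian"),
                       Detection("interior", 0.5, "occupant"), 20.0))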
Regarding claim 2, as detailed above, combination McBride teaches the invention as detailed with respect to claim 1. McBride teaches:
the method further comprising detecting objects in a second vehicle detection field, which comprises an interior region of the vehicle (McBride: ¶ 021; computing device 103 could combine data from the LIDAR sensor(s) 101 and other vehicle 105 sensors, e.g., vision cameras, infrared cameras, etc., to determine an occupant state. For example, the computing device 103 could compare sensor data from LIDAR sensor(s) 101 and camera image data, and determine the vehicle occupant state.) and an exterior region of the vehicle, by at least one second sensor unit of the vehicle, (McBride: ¶ 023; include data indicating distance, size, height of an object in the vehicle 105 surrounding area, e.g., other vehicles, road structures, pedestrians,)
And Breed teaches:
wherein detecting the objects in the second vehicle detection field comprises: generating first detection data of the second detection field by detecting a first object in the exterior region of the second vehicle detection field with the second sensor unit; (Breed: ¶ 168; also suited for monitoring the environment outside of the vehicle for purposes of blind spot detection) and generating second detection data of the second detection field by detecting a second object in the interior of the second vehicle detection field with the second sensor unit, (Breed: ¶ 093; camera can be arranged . . . in an A-pillar or B-pillar of the vehicle to obtain images of an interior environment of the vehicle) wherein the second sensor unit comprises an optical sensor arranged on a pillar of the vehicle opposite the first sensor unit in the interior of the vehicle. (Breed: ¶ 080; system accuracy and permits the location of body parts of the occupant to be determined)
Regarding claim 4, as detailed above, combination McBride teaches the invention as detailed with respect to claim 1. Breed further teaches:
wherein the first object in the exterior region is in a blind-spot region of the vehicle. (Breed: ¶ 168; also suited for monitoring the environment outside of the vehicle for purposes of blind spot detection)
Regarding claim 5, as detailed above, combination McBride teaches the invention as detailed with respect to claim 1. McBride further teaches:
wherein the second object in the interior is an occupant of the vehicle. (McBride: ¶ 021; computing device 103 could compare sensor data from LIDAR sensor(s) 101 and camera image data, and determine the vehicle occupant state.)
Regarding claim 6, as detailed above, combination McBride teaches the invention as detailed with respect to claim 1. Breed further teaches:
wherein the first sensor unit is arranged on an A-pillar of the vehicle in the interior of the vehicle. (Breed: ¶ 241; camera can be arranged in various locations in the vehicle including in a headliner, roof, ceiling, an A-pillar, a B-pillar and a C-pillar.) (Breed: Fig. 2A & 5; [110])
Regarding claim 7, as detailed above, combination McBride teaches the invention as detailed with respect to claim 1. Breed further teaches:
wherein the first sensor unit is arranged on a B-pillar or C-pillar of the vehicle in the interior of the vehicle. (Breed: ¶ 241; camera can be arranged in various locations in the vehicle including in a headliner, roof, ceiling, an A-pillar, a B-pillar and a C-pillar.)
Regarding claim 9, as detailed above, combination McBride teaches the invention as detailed with respect to claim 1. McBride further teaches:
wherein the first sensor unit has a vehicle detection field which, starting from a respective sensor unit, forms a horizontal detection cone of at least 110 degrees. (McBride: Fig. 2B; [301])
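For illustration only, the following sketch shows the geometric test implied by a horizontal detection cone: an object is within the cone when its bearing from the sensor lies within half the cone angle of the sensor’s boresight. The coordinates and boresight values are hypothetical; only the “at least 110 degrees” figure comes from the claim.

    # Illustrative geometry only; coordinates and boresight are hypothetical.
    import math

    def in_cone(sensor_xy, boresight_deg, cone_deg, obj_xy):
        """True if obj_xy lies inside a horizontal cone of cone_deg degrees
        centered on boresight_deg with its apex at sensor_xy."""
        dx, dy = obj_xy[0] - sensor_xy[0], obj_xy[1] - sensor_xy[1]
        bearing = math.degrees(math.atan2(dy, dx))
        # Smallest signed angle between the object bearing and the boresight.
        off_axis = (bearing - boresight_deg + 180.0) % 360.0 - 180.0
        return abs(off_axis) <= cone_deg / 2.0

    # A 110-degree cone admits objects up to 55 degrees off the boresight:
    print(in_cone((0, 0), 0.0, 110.0, (10, 9)))   # ~42 degrees -> True
    print(in_cone((0, 0), 0.0, 110.0, (10, 15)))  # ~56 degrees -> False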
Regarding claim 10, as detailed above, combination McBride teaches the invention as detailed with respect to claim 1. McBride further teaches:
further comprising: providing a driving assistance function based on the first detection data and the second detection data by way of a driving assistance system. (McBride: ¶ 008; can use such data for various purposes, e.g., collision detection and avoidance, automatic cruise control)
Regarding claim 11, as detailed above, combination McBride teaches the invention as detailed with respect to claim 1. McBride further teaches:
wherein the first sensor unit comprises an optical sensor comprising a camera or a laser scanner. (McBride: ¶ 021; computing device 103 could combine data from the LIDAR sensor(s) 101 and other vehicle 105 sensors, e.g., vision cameras, infrared cameras)
Regarding claim 12, McBride teaches a detection system for:
detecting objects in a first vehicle detection field, which comprises an interior and an exterior region of a vehicle, comprising: (McBride: ¶ 008; LIDAR sensor can be arranged to have a field of view that includes interior and exterior portions of the vehicle)

a first sensor unit for: generating first detection data of the first detection field by detecting a first object in the exterior region of the first vehicle detection field (McBride: ¶ 023; include data indicating distance, size, height of an object in the vehicle 105 surrounding area, e.g., other vehicles, road structures, pedestrians); and generating second detection data of the first detection field by detecting a second object in the interior of the first vehicle detection field (McBride: ¶ 022; generating 3D model of the vehicle interior);

a computing unit for processing the first detection data and the second detection data of the first vehicle detection field (McBride: ¶ 021; computing device 103 could combine data from the LIDAR sensor(s) 101 and other vehicle 105 sensors, e.g., vision cameras, infrared cameras); and

a control unit configured to control a braking system of an autonomous vehicle based on the first detection data and the second detection data, thereby causing the autonomous vehicle to have a reduced speed prior to reaching a location of the first object (McBride: ¶ 036; the computing device 103 may use a data indicating a distance of the vehicle 105 to an object to determine whether the vehicle 105 brake should be actuated);

. . . wherein the first vehicle detection field comprises a detection cone comprising at least a portion of the exterior region disposed in front of the vehicle. (McBride: ¶ 023; generate a 3D map of vehicle 105 surrounding area encompassed by the vehicle exterior)
wherein the first sensor unit is arranged such that the first vehicle detection field of the first sensor unit extends from a first vehicle side, at least across the interior, transversely to a longitudinal axis of the vehicle, to an opposite, second vehicle side into the exterior region. (McBride: Fig. 2B; [301]) (McBride: ¶ 018; As illustrated in FIG. 2B, the different sensors 101 may generate different respective fields of view that may overlap, e.g., cover a same portion of a vehicle 105 interior 205. Typically, each of the fields of view 301a, 301b may have a detection range of 200 meters)
McBride does not explicitly teach:
wherein the first sensor unit comprises a camera arranged on a pillar of the vehicle in the interior of the vehicle; however, Breed does teach:
wherein the first sensor unit comprises a camera arranged on a pillar of the vehicle in the interior of the vehicle, and (Breed: ¶ 093; camera can be arranged . . . in an A-pillar or B-pillar of the vehicle to obtain images of an interior environment of the vehicle) (Breed: ¶ 168; also suited for monitoring the environment outside of the vehicle for purposes of blind spot detection)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Breed with the teachings of McBride because doing so would result in the predictable benefit that better detection of occupants would improve airbag systems. (Breed: ¶ 011).
Regarding claim 13, as detailed above, combination McBride teaches the invention as detailed with respect to claim 12. McBride further teaches:
wherein the control unit is further configured to provide the driving assistance function based on the first detection data and the second detection data of the first vehicle detection field. (McBride: ¶ 008; can use such data for various purposes, e.g., collision detection and avoidance, automatic cruise control) (McBride: ¶ 036; the computing device 103 may use a data indicating a distance of the vehicle 105 to an object to determine whether the vehicle 105 brake should be actuated.)
Regarding claim 15, as detailed above, combination McBride teaches the invention as detailed with respect to claim 2. Breed further teaches:
wherein the second sensor unit is arranged on an A-pillar of the vehicle in the interior of the vehicle. (Breed: ¶ 241; camera can be arranged in various locations in the vehicle including in a headliner, roof, ceiling, an A-pillar, a B-pillar and a C-pillar.) (Breed: Fig. 2A & 5; [110])
Regarding claim 16, as detailed above, combination McBride teaches the invention as detailed with respect to claim 2. Breed further teaches:
wherein the second sensor unit is arranged on a B-pillar or C-pillar of the vehicle in the interior of the vehicle. (Breed: ¶ 241; camera can be arranged in various locations in the vehicle including in a headliner, roof, ceiling, an A-pillar, a B-pillar and a C-pillar.)
Regarding claim 17, as detailed above, combination McBride teaches the invention as detailed with respect to claim 2. McBride further teaches:
wherein the second sensor unit is arranged in a manner such that the second vehicle detection field of the second sensor unit extends from a second vehicle side, at least across the interior, transversely to the longitudinal axis of the vehicle, to an opposite, first vehicle side into the exterior region. (McBride: Fig. 2B; [301])
Regarding claim 18, as detailed above, combination McBride teaches the invention as detailed with respect to claim 2. McBride further teaches:
wherein the second sensor unit has a vehicle detection field which, starting from a respective sensor unit, forms a horizontal detection cone of at least 110 degrees. (McBride: Fig. 2B; [301])
Regarding claim 19, as detailed above, combination McBride teaches the invention as detailed with respect to claim 2. McBride further teaches:
wherein the second sensor unit comprises an optical sensor comprising a camera or a laser scanner. (McBride: ¶ 021; computing device 103 could combine data from the LIDAR sensor(s) 101 and other vehicle 105 sensors, e.g., vision cameras, infrared cameras)
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over combination McBride as applied to claim 2 above, and further in view of Raab (US 20160200254 A1).
Regarding claim 3, as detailed above, combination McBride teaches the invention as detailed with respect to claim 2. Combination McBride does not explicitly teach:
wherein the first sensor unit and the second sensor unit are arranged opposite one another on two pillars of a same type, of the vehicle; however, Raab does teach:
wherein the first sensor unit and the second sensor unit are arranged opposite one another on two pillars of a same type, of the vehicle. (Raab: ¶ 064; vehicle 216 has an A pillar 202, a B pillar 204, a C pillar 206, and a D pillar 208. Each pillar may have mounted thereon one or more cameras 110, 112 and 114 respectively.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Raab with the teachings of McBride because doing so would result in the predictable benefit of elimination of blind spots, which would improve safety. (Raab: ¶ 076).
Response to Arguments
Applicant's remarks filed December 9, 2025 have been fully considered.
Applicant’s arguments and amendments with respect to the previously applied 35 U.S.C. § 112(b) rejection are persuasive and the rejection is hereby withdrawn.
Applicant’s arguments and amendments with respect to the previously applied 35 U.S.C. § 101 rejection for subject matter eligibility are persuasive and the rejection is hereby withdrawn.
Applicant's arguments filed December 9, 2025 have been fully considered but they are not persuasive. Applicant argues that newly amended independent claim 1 now:
encompasses the subject matter of the now-canceled dependent claim 8. With respect to dependent claim 8, Figure 2B of McBride is referenced in the Office Action as depicting that a LiDAR sensor 101 has a field of view 301. Id., page 11. However, as shown in Figure 2B, the field of view 301 of a particular LiDAR sensor 101 only covers approximately two thirds of the width of the vehicle 105. For example, the left hand LiDAR sensor 101 has a field of view 301 that extends to the passenger side seat of the vehicle 105, and the right hand LiDAR sensor 101 has a field of view 301 that extends to the driver's side seat of the vehicle 105. See McBride, Figure 2B. Thus, none of the sensors of McBride are arranged with a field of view that extends from a first vehicle side, at least across the interior, to an opposite second vehicle side as required by limitation (ii). (Applicant’s Arguments filed December 9, 2025, pg. 14).
Examiner notes that Applicant’s description of McBride, Figure 2B addresses a particular embodiment of McBride’s invention, in which the fields of view are represented by the circles 301a and 301b, but further notes that McBride: ¶ 018 describes that “the different sensors 101 may generate different respective fields of view that may overlap [and that] each of the fields of view 301a, 301b may have a detection range of 200 meters, i.e., a maximum distance away from each of the LIDAR sensor 101 in which, e.g., an object, may be detected by the respective LIDAR sensor.” In other words, McBride teaches sensors whose detection range extends well outside of the vehicle in every direction, restricted only by obstructions in the field of view, exactly as in Applicant’s invention. Therefore, Applicant’s argument is not persuasive because McBride’s other embodiments teach the limitation. Applicant further argues:
Turning to Breed, paragraphs [0093] and [0168] describe that a camera arranged on the A-pillar of the vehicle may be suitable for blind spot detection outside of the vehicle. However, and as described by the paragraphs [0125]-[0127] of Breed, Breed requires multiple sensor assemblies (denoted as elements 110-114 in Figures 1A-1D) to create a field of view of the interior of the vehicle. (Applicant’s Arguments filed December 9, 2025, pgs. 14-15).
Examiner disagrees and notes that Breed explicitly contemplates single-camera operation (Breed: ¶ 093; at least one active pixel camera for obtaining images of the environment of the vehicle). While Fig. 1A shows multiple cameras as a particular embodiment, a person of ordinary skill in the art would recognize that an embodiment could comport with Breed’s one-camera arrangement contemplated at ¶ 093. Consequently, McBride in view of Breed teaches or suggests an embodiment wherein the fields of view 301a and 301b extend beyond the extent of the vehicle such that the “. . . field of the first sensor unit extends from a first vehicle side, at least across the interior, transversely to a longitudinal axis of the vehicle, to an opposite, second vehicle side into the exterior region.” Accordingly, Applicant’s arguments and amendments are not persuasive.
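To illustrate the scale relationship underlying the Examiner’s position, the following back-of-the-envelope sketch compares McBride’s disclosed 200-meter detection range (¶ 018) against an assumed passenger-vehicle width; the width value is a hypothetical assumption for illustration only and is not taken from either reference.

    # The 200 m figure is from McBride ¶ 018; the vehicle width is an
    # assumed, hypothetical value for illustration only.
    detection_range_m = 200.0
    assumed_vehicle_width_m = 2.0
    # A side-mounted sensor whose range vastly exceeds the vehicle width
    # would sweep across the interior and into the exterior region on the
    # opposite side.
    print(f"Range exceeds the far vehicle side by "
          f"{detection_range_m - assumed_vehicle_width_m:.0f} m.")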
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Jiang (CN-108482385-A) teaches a vehicle-mounted camera system that identifies the risk and complexity of a traffic environment.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES PALL whose telephone number is (571)272-5280. The examiner can normally be reached on M-F 9:30 - 18:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Angela Ortiz can be reached on 571-272-1206. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C.P./Examiner, Art Unit 3663
/ANGELA Y ORTIZ/Supervisory Patent Examiner, Art Unit 3663