DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 28 October 2025 has been entered.
Status of Claims
This Office Action is in response to the application filed 28 October 2025. Claims 1-2, 4-13, and 15-22 are presently pending and are presented for examination.
Priority
Acknowledgement is made of applicant’s claim for foreign priority based on application KR10-2022-0061419, filed in the Republic of Korea on 19 May 2022.
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Response to Arguments
Applicant's arguments, see Remarks filed 28 October 2025, have been fully considered but they are not persuasive.
Applicant argues, see Remarks, pgs. 10-12, that the prior art references “fail to teach or suggest inventive features of the presently claimed invention, wherein the processor determines the operation threshold of the at least one safety device based on a combination of the occupant’s seat rotation angle and tilt…”. However, WO-2021207886-A1 (“Chen”) discloses determining an operation threshold of at least one safety device based on “the state information on the at least one of the seat…[and] the occupant…[including] at least one of a rotation angle of the seat, a position of the seat, a tilt of the seat, a rotation angle of the occupant, a position of the occupant, and a tilt of the occupant” and the “combination of the occupant’s seat rotation angle [i.e., seat yaw] and tilt [i.e., seat pitch],” as described in the translated document of Chen, paragraph 37: “The control module 100 determines the weight information of the occupant, the seat position information of the occupant [i.e., “the state information on the at least one of the seat”, which includes “the occupant’s seat rotation angle [i.e., seat yaw] and tilt [i.e., seat pitch]”] and the posture information of the occupant through the capacitance sensor 101, the position sensor 102 and the seat belt buckle tension sensor 103, and determines based on the weight information, seat position information [i.e., occupant’s seat rotation angle (seat yaw) and tilt (seat pitch)] and posture information[,] [t]he category information of the occupant [i.e., obtain state information on at least one of the seat or the occupant based on the information obtained from the first sensors]. 
Secondly, the control module 100 determines the burst parameters of the airbag and the force limit value of the pre-tightened force-limiting seat belt according to the category information [i.e., determine at least one safety device to be operated among the plurality of safety devices based on the state information on the at least one of the seat or the occupant]. In this way, different protective measures are taken for different occupants.” The “state information” of the seat includes a subset of the orientations used in rigid body dynamics, namely seat yaw and seat pitch. One of ordinary skill in the art, at the time of the application, would have known that when considering rigid body dynamics (see “Rigid Body Dynamics”, Wikipedia, 2015-06-02), the six degrees of freedom apply (see “Degrees of Freedom (mechanics)”, Wikipedia, 2016-04-26), but that not all six degrees of freedom may be relevant, depending on the rigid body being considered (i.e., a vehicle seat); therefore, any relevant combination of six or fewer orientations could be used [i.e., yaw, pitch, roll, left/right, forward/back and/or up/down]. Given that a seat is a three-dimensional body, the orientations of a “seat position” can be described using the six degrees of freedom, or any relevant combination of six or fewer degrees of freedom, and Chen discloses “seat position”. Therefore, any teaching that references “seat position” would be understood by one of ordinary skill in the art to include any relevant subset of orientations, and any relevant subset would be an obvious variant of any other relevant subset. For these reasons, examiner is unpersuaded and maintains the corresponding rejections.
The remaining arguments are essentially the same as those addressed above and/or below and are unpersuasive for at least the same reasons. Therefore, examiner is unpersuaded and maintains the corresponding rejections.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “safety devices” and “safety device” in claim 1.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The corresponding structure is: “The plurality of safe devices includes a plurality of airbags provided at different positions within the vehicle, and a plurality of pre-safe seat belts (PSBs) provided in different seats in the vehicle” (para. 0012).
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 4, 12-13, and 15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by WO-2021207886-A1, hereinafter “Chen” (previously of record).
Regarding claim 1 and analogous claim 12, Chen discloses a vehicle for protecting an occupant (translated document of Chen, para. 5: “The embodiments of the present application provide an occupant protection method, device, system, terminal, and storage medium. Different safety control strategies are adopted for different types of occupants to protect the occupants in the vehicle.”), the vehicle comprising:
a plurality of safety devices provided in the vehicle and configured for protecting the occupant (translated document of Chen, para. 6: “The occupant protection system includes a capacitance sensor, a position sensor, a seat belt buckle tension sensor, a collision sensor, and an airbag And pre-tightened force-limiting seat belts…”);
first sensors configured to obtain information on a seat or the occupant within the vehicle (translated document of Chen, para. 7: “Determine occupant's weight information [i.e., information on…the occupant within the vehicle], occupant's seat position information [i.e., information on a seat] and occupant's posture information [i.e., information on…the occupant within the vehicle] through capacitance sensor, position sensor and seat belt buckle tension sensor [i.e., first sensors]…”);
second sensors configured to detect a collision of the vehicle with objects (translated document of Chen, para. 6: “The occupant protection system includes a capacitance sensor, a position sensor, a seat belt buckle tension sensor, a collision sensor [i.e., second sensors configured to detect a collision], and an airbag And pre-tightened force-limiting seat belts…”); and
a processor which is operatively connected to the plurality of safety devices, the first sensors, and the second sensors (translated document of Chen, para. 23: “On the other hand, an embodiment of the present application provides a terminal. The device includes a processor and a memory. The memory stores at least one instruction or at least one program. The at least one instruction or at least one program is loaded by the processor and executes the aforementioned occupant protection. method.”; para. 38: “In the embodiment of the present application, the control module 100 [i.e., a processor] may be set in a device, such as a vehicle terminal, a mobile terminal, a computer terminal, or a similar computing device.”; para. 37: “The control module 100 [i.e., a processor] determines the weight information of the occupant, the seat position information of the occupant and the posture information of the occupant through the capacitance sensor 101, the position sensor 102 and the seat belt buckle tension sensor 103 [i.e., operatively connected to…the first sensors]… Secondly, the control module 100 determines the burst parameters of the airbag and the force limit value of the pre-tightened force-limiting seat belt according to the category information [i.e., operatively connected to the plurality of safety devices]... If the control module 100 detects the collision signal sent by the collision sensor 104 [i.e., operatively connected to…the second sensors], obtains the collision information…”), wherein the processor is configured to:
obtain state information on at least one of the seat or the occupant based on the information obtained from the first sensors (translated document of Chen, para. 37: “The control module 100 determines the weight information of the occupant [i.e., information on…the occupant], the seat position information of the occupant [i.e., information on…the seat] and the posture information of the occupant [i.e., information on…the occupant] through the capacitance sensor 101, the position sensor 102 and the seat belt buckle tension sensor 103 [i.e., obtained from the first sensors], and determines based on the weight information, seat position information and posture information [i.e., information on…the occupant] The category information [i.e., state information] of the occupant.”),
determine at least one safety device to be operated among the plurality of safety devices based on the state information on the at least one of the seat or the occupant (translated document of Chen, para. 37: “Secondly, the control module 100 determines the burst parameters of the airbag and the force limit value of the pre-tightened force-limiting seat belt according to the category information [i.e., based on the state information on the at least one of the seat or the occupant].”), and
operate the determined at least one safety device when at least one of the second sensors detects the collision satisfying a predetermined condition (translated document of Chen, para. 37: “If the control module 100 detects the collision signal sent by the collision sensor 104 [i.e., at least one of the second sensors], obtains the collision information, and sends on-site accident information to the rescue center server at the same time; the collision information includes the collision location and the collision intensity, and the on-site accident information includes category information and collision information. At the same time, the control module 100 determines whether the airbag deployment conditions are met [i.e., collision satisfying a predetermined condition] according to the collision information, and if so, the airbags are deployed according to the deployment parameters [i.e., operate the determined at least one safety device], and the occupants are restrained by the pretensioned force-limiting seat belts according to the force-limiting value [i.e., operate the determined at least one safety device].”), and
wherein the state information on the at least one of the seat or the occupant includes at least one of a rotation angle of the seat, a position of the seat, a tilt of the seat, a rotation angle of the occupant, a position of the occupant, and a tilt of the occupant (translated document of Chen, para. 37: “The control module 100 determines the weight information of the occupant, the seat position information of the occupant [i.e., includes…a position of the seat,] and the posture information of the occupant [i.e., includes…a rotation angle of the occupant, a position of the occupant, and a tilt of the occupant] through the capacitance sensor 101, the position sensor 102 and the seat belt buckle tension sensor 103, and determines based on the weight information, seat position information and posture information The category information of the occupant [i.e., state information on the at least one of the seat or the occupant].”),
wherein the processor is further configured to determine an operation threshold of the at least one safety device to be operated based on the state information on the at least one of the seat or the occupant (translated document of Chen, para. 8: “Determine the occupant's category information [i.e., state information on the at least one of the seat or the occupant] based on weight information, seat position information [i.e., a position of the seat] and posture information [i.e., a rotation angle of the occupant, a position of the occupant, and a tilt of the occupant];”; para. 37: “Secondly, the control module 100 [i.e., the processor] determines the burst parameters of the airbag and the force limit value of the pre-tightened force-limiting seat belt [i.e., determine an operation threshold of the at least one safety device to be operated] according to the category information [i.e., based on the state information on the at least one of the seat or the occupant]. In this way, different protective measures are taken for different occupants…At the same time, the control module 100 determines whether the airbag deployment conditions are met according to the collision information, and if so, the airbags are deployed according to the deployment parameters [i.e., determine an operation threshold of the at least one safety device to be operated], and the occupants are restrained by the pretensioned force-limiting seat belts according to the force-limiting value [i.e., determine an operation threshold of the at least one safety device to be operated].”),
compare an impact strength detected from at least one of the second sensors with the operation threshold (translated document of Chen, para. 37: “At the same time, the control module 100 determines whether the airbag deployment conditions are met according to the collision information [i.e., compare an impact strength detected from at least one of the second sensors with the operation threshold], and if so, the airbags are deployed according to the deployment parameters, and the occupants are restrained by the pretensioned force-limiting seat belts according to the force-limiting value.”; Note: It is known to one of ordinary skill in the art, at the time of the application, that collision impact strength thresholds are used to determine whether and/or when to deploy collision-related safety systems. See translated document of DE-102014223618-A1, published on 19 May 2016, paragraph 0007: “Known systems record environmental or accident scenarios of the motor vehicle by means of a sensor system and determine a deceleration therefrom, for example by means of acceleration sensors [i.e., detected from…sensors]. This recognizes an acceleration level which is used to decide how strong a collision is [i.e., compare an impact strength…with the operation threshold]. From this, a decision can be made as to whether or not an airbag is to be deployed [i.e., operate the determined at least one safety device when the detected impact strength is greater than the operation threshold]. In the event of an impact, it is decided in which hardness the airbag is to be deployed, and a two-stage deployment is known.”), and
operate the determined at least one safety device when the detected impact strength is greater than the operation threshold (translated document of Chen, para. 11: “If it is determined according to the collision information that the airbag deployment conditions are met [i.e., detected impact strength is greater than the operation threshold], the airbags are deployed according to the deployment parameters [i.e., operate the determined at least one safety device], and the occupants are restrained by the pre-tensioned force-limiting seat belts according to the force-limiting value [i.e., operate the determined at least one safety device].”; para. 37: “…the collision information includes the collision location and the collision intensity [i.e., impact strength]…”; Note: It is known to one of ordinary skill in the art, at the time of the application, that collision impact strength thresholds are used to determine whether and/or when to deploy collision-related safety systems. See translated document of DE-102014223618-A1, published on 19 May 2016, paragraph 0007: “Known systems record environmental or accident scenarios of the motor vehicle by means of a sensor system and determine a deceleration therefrom, for example by means of acceleration sensors [i.e., detected from…sensors]. This recognizes an acceleration level which is used to decide how strong a collision is [i.e., compare an impact strength…with the operation threshold]. From this, a decision can be made as to whether or not an airbag is to be deployed [i.e., operate the determined at least one safety device when the detected impact strength is greater than the operation threshold]. In the event of an impact, it is decided in which hardness the airbag is to be deployed, and a two-stage deployment is known.”), and
wherein the processor determines the operation threshold of the at least one safety device based on a combination of the occupant's seat rotation angle and tilt (translated document of Chen, para. 8: “Determine [i.e., processor determines] the occupant's category information based on weight information, seat position information [i.e., based on a combination of the occupant's seat rotation angle and tilt] and posture information;”; para. 37: “Secondly, the control module 100 determines the burst parameters of the airbag and the force limit value of the pre-tightened force-limiting seat belt [i.e., processor determines the operation threshold of the at least one safety device] according to the category information [i.e., based on a combination of the occupant's seat rotation angle and tilt]. In this way, different protective measures are taken for different occupants…At the same time, the control module 100 determines whether the airbag deployment conditions are met according to the collision information, and if so, the airbags are deployed according to the deployment parameters [i.e., processor determines the operation threshold of the at least one safety device], and the occupants are restrained by the pretensioned force-limiting seat belts according to the force-limiting value [i.e., processor determines the operation threshold of the at least one safety device].”).
Regarding claim 2 and analogous claim 13, Chen discloses the vehicle of claim 1,
wherein the plurality of safety devices includes at least one of a plurality of airbags provided at different positions within the vehicle, and a plurality of pre-safe seat belts (PSBs) provided in different seats in the vehicle (translated document of Chen, para. 6: “The occupant protection system includes a capacitance sensor, a position sensor, a seat belt buckle tension sensor, a collision sensor, and an airbag And pre-tightened force-limiting seat belts…”).
Regarding claim 4 and analogous claim 15, Chen discloses the vehicle of claim 1,
wherein the first sensors include at least one of a sensor configured to detect the rotation angle of the seat, a sensor configured to detect the position of the seat, or a sensor configured to detect the tilt of the seat (translated document of Chen, para. 18: “…the embodiments of the present application provide an occupant protection system, including a capacitance sensor [i.e., the first sensors], a position sensor [i.e., the first sensors], a seat belt buckle tension sensor [i.e., the first sensors], a collision sensor, an airbag, and a pre-tightened force-limiting seat belt…”; para. 20: “…Position sensor, used to determine the seat position information of the occupant [i.e., at least one of a sensor configured to…detect the position of the seat,]…”).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 5 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Chen as applied to claims 1 and 12 above, and further in view of US-20210402942-A1, hereinafter “Torabi” (previously of record).
Regarding claim 5 and analogous claim 16, Chen discloses the vehicle of claim 1, but does not appear to explicitly disclose the following:
wherein the first sensors include a camera configured to capture the occupant, and wherein the processor is further configured to: extract three-dimensional (3D) human body keypoints from an image captured by the camera by use of an artificial neural network-based deep learning model, and obtain the state information on the occupant based on the extracted 3D human body keypoints.
However, in the same field of endeavor, Torabi teaches:
wherein the first sensors include a camera configured to capture the occupant (Torabi, para. 0027: “In more detail, the system may include one or more machine learning models and/or deep neural networks (DNNs)—such as convolutional neural networks (CNNs)—for processing sensor data (e.g., image data from one or more in-cabin cameras) [i.e., first sensors include a camera] to determine a pose, shape, position, size, hand activity, body activity, and/or other physical characteristics of occupants of the vehicle [i.e., capture the occupant], and to determine one or more actions or procedures as a result of the determinations.”), and
wherein the processor is further configured to (Torabi, para. 0071: “Now referring to FIGS. 7, 8, and 9, each block of methods 700, 800, and 900, described herein, comprises a computing process that may be performed using any combination of hardware, firmware, and/or software. For instance, various functions may be carried out by a processor executing instructions stored in memory.”):
extract three-dimensional (3D) human body keypoints from an image captured by the camera by use of an artificial neural network-based deep learning model (Torabi, para. 0027: “In more detail, the system may include one or more machine learning models and/or deep neural networks (DNNs)—such as convolutional neural networks (CNNs)—for processing sensor data [i.e., by use of an artificial neural network-based deep learning model] (e.g., image data from one or more in-cabin cameras) to determine a pose, shape, position, size, hand activity, body activity, and/or other physical characteristics of occupants of the vehicle, and to determine one or more actions or procedures as a result of the determinations.”; para. 0031: “In this way, the body-pose estimator and shape reconstructor 102 (e.g. using a body-pose estimation and shape reconstruction network) may simultaneously localize body key points for one or more occupants (e.g., driver and any passengers) in an in-cabin scene in an image space [i.e., from an image captured by the camera]. In particular, the body-pose estimator and shape reconstructor 102 may generate 2D (e.g., (x,y)) or 3D (e.g., (x,y,z)) projections (e.g., estimated skeletal models or rigs) based on the estimated body key points [i.e., extract three-dimensional (3D) human body keypoints].”), and
obtain the state information on the occupant based on the extracted 3D human body keypoints (Torabi, para. 0031: “In particular, the body-pose estimator and shape reconstructor 102 may generate 2D (e.g., (x,y)) or 3D (e.g., (x,y,z)) projections (e.g., estimated skeletal models or rigs) based on the estimated body key points [i.e., based on the extracted 3D human body keypoints]. For instance, estimated skeletal models may be based on body-pose information (e.g. from the first branch of the network) and this information may be used to compute a shape—or non-rigid deformations of a model surface—such that current activities, postures, and/or gestures of occupants [i.e., obtain the state information on the occupant] can be monitored and acted upon by the system.”).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable likelihood of success, to modify the invention disclosed by Chen with the concept of using an artificial neural network to process 3D human body key points from images of vehicle occupants and use the data to obtain vehicle occupant information, taught by Torabi, in order to determine the sizes, actions, postures or poses of vehicle occupants and use that information to determine how/whether to use the vehicle safety systems (Torabi, para. 0002: “In conventional systems, dedicated sensors have been used that may allow for monitoring a driver or passenger within a vehicle. For instance, sensors may be used to detect the presence of a person using a weight sensor or heat sensor and, based on this detected weight or heat, human-machine interactions—such as airbag deployment—may be adjusted. However, merely relying on a weight of or heat generated by a person does not provide comprehensive analysis of what is happening in a vehicle—such as a posture or pose of individuals inside of the cabin. As such, some conventional systems have attempted to use raw images in an attempt to identify the presence of persons in a vehicle; however, such systems are often limited to a location of passengers in the vehicle and likewise fail at identifying actions, postures, or poses of a driver (e.g., hands on the wheel, hands off the wheel, preoccupied texting, reading, etc.) and/or passengers of the vehicle. As a result, the determinations of these systems may be limited, and may not result in determinations by the system that result in a safest or most comfortable action or outcome.”).
Claim(s) 6, 10, 17, and 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chen as applied to claim 1 and claim 12 above, and further in view of Torabi and CN-113556975-A, hereinafter “Gronau” (previously of record).
Regarding claim 6 and analogous claim 17, Chen and Torabi teach the vehicle of claim 5, but do not appear to explicitly teach the following:
wherein the artificial neural network-based deep learning model is trained based on a new 3D body joint coordinate truth value which is generated by transforming a 3D body joint coordinate truth value.
However, in the same field of endeavor, Gronau teaches:
wherein the artificial neural network-based deep learning model is trained based on a new 3D body joint coordinate truth value which is generated by transforming a 3D body joint coordinate truth value (translated document of Gronau, para. 255: “Alternatively or in combination, a 2D image or a 3D image or a combination of the two can be input to a neural network, which can be trained [i.e., artificial neural network-based deep learning model is trained] to estimate the weight of the identified object from these images. In both cases, the training session can include improving the estimate in order to reduce the cost function of comparing the estimate with the ground truth values [i.e., based on a new 3D body joint coordinate truth value which is generated by transforming a 3D body joint coordinate truth value] of height and weight.”).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable likelihood of success to modify the invention disclosed by Chen, as modified by Torabi, with the concept of training a deep learning model with image data containing a ground truth, taught by Gronau, in order to provide the “correct answers” from which the model learns. (Note: One of ordinary skill in the art, at the time of the application, would know that supervised deep learning models need to learn from “correct answers”, which ground truth data provides. See Computer Vision Metrics, Chapter 7 – Ground Truth Data, Content, Metrics, and Analysis (2014): “In the context of computer vision, ground truth data includes a set of images, and a set of labels on the images, and defining a model for object recognition as discussed in Chapter 4, including the count, location, and relationships of key features. The labels are added either by a human or automatically by image analysis, depending on the complexity of the problem. The collection of labels, such as interest points, corners, feature descriptors, shapes, and histograms, form a model.”).
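For illustration only (this sketch is not part of the record or the claim mapping), generating a new ground-truth sample by transforming an existing 3D joint coordinate truth value can be as simple as rotating the joint set; the function name and the choice of a y-axis rotation are assumptions:

```python
import math

def rotate_joints_y(joints, angle_deg):
    """Rotate 3D joint coordinates about the vertical (y) axis to
    synthesize a new ground-truth sample from an existing one."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    out = []
    for x, y, z in joints:
        # Standard rotation about the y axis applied per joint.
        out.append((x * cos_a + z * sin_a, y, -x * sin_a + z * cos_a))
    return out

# Original ground-truth joints and a transformed copy used as a new sample.
truth = [(0.2, 1.4, 0.5), (-0.2, 1.4, 0.5)]
augmented = rotate_joints_y(truth, 90.0)
```

Such transformed copies are one common way a supervised model's training set is enlarged while keeping labels exact.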
Regarding claim 10 and analogous claim 21, Chen, Torabi, and Gronau teach the vehicle of claim 6, and Gronau further teaches:
wherein the processor is further configured to: measure a distance to a keypoint corresponding to a predetermined body portion among the 3D human body keypoints, and determine the position of the occupant based on the measured distance (translated document of Gronau, para. 276: “Generally, the generated skeleton model is used to identify the orientation/posture/distance of the occupant in the image [i.e., determine the position of the occupant based on the measured distance] obtained from the imaging device. Specifically, each of the skeletal models contains data such as the 3D key points (x, y, z) of the occupant relative to the XYZ coordinate system, where the (x, y) point represents the joint surface of the occupant's body in the obtained image Position, and (z) represents the distance between the relevant (x,y) keypoint surface [i.e., measure a distance to a keypoint corresponding to a predetermined body portion among the 3D human body keypoints] and the image sensor.”).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable likelihood of success to modify the invention disclosed by Chen, as modified by Torabi and Gronau, with the concept of measuring a distance of an occupant body key point in a vehicle, taught by Gronau, in order to accurately determine the position of the occupant in the vehicle and therefore comprehend how they will or will not be affected by a collision and the resulting vehicle safety systems (translated document of Gronau, para. 7: “Specifically, driver and passenger sensing in the vehicle cabin requires a high degree of accuracy, especially safety-related applications, such as seat belt reminders (SBR), such as out-of-position indication (OOP) for airbag suppression, and Driver Monitoring System (DMS) used to alert the driver. In addition, today's advanced driver assistance systems (ADAS) and autonomous cars require precise information about the driver and passengers in the vehicle cabin.”).
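To make Gronau's keypoint model concrete, a minimal sketch (illustrative only; the keypoint name, threshold, and labels are assumptions, not claim terms) of reading the sensor-to-keypoint distance from a 3D keypoint whose z component is the range to the image sensor, and classifying the occupant's position from it:

```python
def keypoint_distance(keypoints, name):
    """Return the sensor-to-keypoint distance for a named body portion.
    Here z is the range from the image sensor, as in Gronau's model."""
    x, y, z = keypoints[name]
    return z

def occupant_position(keypoints, near=0.4, name="chest"):
    """Classify the occupant as out of position when the chosen
    keypoint is closer to the sensor than a threshold distance."""
    if keypoint_distance(keypoints, name) < near:
        return "out_of_position"
    return "normal"

kp = {"chest": (0.0, 1.2, 0.55)}
```

The threshold comparison mirrors the out-of-position (OOP) use case Gronau mentions for airbag suppression.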
Claim(s) 7-9, 11, 18-20, and 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chen as applied to claim 1 and claim 12 above, and further in view of Torabi, Gronau, and US-20210150754-A1, hereinafter “Tanaka” (previously of record).
Regarding claim 7 and analogous claim 18, Chen, Torabi, and Gronau teach the vehicle of claim 6, but do not appear to explicitly teach the following:
wherein the processor is further configured to: estimate a first rotation angle between a predetermined first reference line and a shoulder line in an x-y plane based on the 3D human body keypoints, and determine the first rotation angle as the rotation angle of the occupant, and wherein the predetermined first reference line is set parallel to the shoulder line when a body of the occupant faces a front of the vehicle.
However, in the same field of endeavor, Tanaka teaches:
wherein the processor (Tanaka, para. 0030: “The vehicle body A includes a human body information processing device C (an example of a physique estimation device) that specifies seating positions of occupants (including a driver) from the images captured by the camera 6, acquires skeleton point coordinates of a plurality of locations from a body image of each occupant, and estimates a skeleton of the occupant.”) is further configured to:
estimate a first rotation angle between a predetermined first reference line and a shoulder line in an x-y plane based on the 3D human body keypoints, (Tanaka, Fig. 8; para. 0056: “FIG. 8 illustrates a body image in which the upper body of the occupant seated in the predetermined seat S is in a twisted posture. The same figure illustrates a line obtained by connecting the skeleton points (skeleton point coordinates) of the left and right shoulders as a twisted shoulder line Laz [i.e., a shoulder line in an x-y plane based on the 3D human body keypoints] (skeleton line), a line obtained by connecting the skeleton points (skeleton point coordinates) of the left and right portions of the waist as a twisted waist line Lbz (skeleton line), and the spine as a twisted spine line Lcz. In this twisted posture, in FIG. 8 the twisted shoulder line Laz is inclined with respect to the shoulder line La of the reference posture [i.e., a predetermined first reference line] by a twisted angle θ4 (angle information).”) and
determine the first rotation angle as the rotation angle of the occupant (Tanaka, para. 0056: “In this twisted posture, in FIG. 8 the twisted shoulder line Laz is inclined with respect to the shoulder line La of the reference posture by a twisted angle θ4 [i.e., first rotation angle as the rotation angle of the occupant] (angle information).”), and
wherein the predetermined first reference line is set parallel to the shoulder line when a body of the occupant faces a front of the vehicle (Tanaka, para. 0056: “In this twisted posture, in FIG. 8 the twisted shoulder line Laz is inclined with respect to the shoulder line La [i.e., predetermined first reference line] of the reference posture by a twisted angle θ4 (angle information).”; para. 0046: “FIG. 5 illustrates a body image obtained by capturing an image of the occupant sitting in a normal posture (hereinafter referred to as a reference posture) on a predetermined seat S from an oblique angle by the camera 6. The shoulder line La and the waist line Lb are horizontal, and the spine line Lc extends in a vertical direction [i.e., set parallel to the shoulder line when a body of the occupant faces a front of the vehicle].”).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable likelihood of success to modify the invention disclosed by Chen, as modified by Torabi and Gronau, with the concept of using a shoulder line (a line connecting the body key points on each shoulder) to determine the rotation angle of a person, taught by Tanaka, in order to determine the posture of the occupant and effectively deploy vehicle safety systems (Tanaka, para. 0006: “When a passenger car is described as an example, it is possible to grasp a driving state from a driving posture of the driver seated in a driver seat, and a posture of an upper body of the driver seated in the driver seat is important from a viewpoint of protecting the driver when an air bag is activated.”; para. 0007: “Further, in a vehicle provided with side air bags, it is desirable to prevent a posture of an upper body of an occupant seated in an assistant passenger seat or a rear passenger seat from leaning against inner walls on which the side air bags are disposed.”).
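For illustration only (not part of the claim mapping; the axis convention is an assumption), the first rotation angle between the shoulder line and a reference line in the x-y plane reduces to a two-argument arctangent over the (x, y) components of the shoulder keypoints:

```python
import math

def shoulder_yaw_deg(left_shoulder, right_shoulder):
    """Angle between the shoulder line and a reference line taken as
    the x axis (i.e., the shoulder line of a front-facing occupant),
    measured in the x-y plane from the 3D keypoints' (x, y) parts."""
    dx = left_shoulder[0] - right_shoulder[0]
    dy = left_shoulder[1] - right_shoulder[1]
    return math.degrees(math.atan2(dy, dx))
```

A front-facing occupant yields 0 degrees by construction, matching a reference line set parallel to the shoulder line when the body faces the front of the vehicle.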
Regarding claim 8 and analogous claim 19, Chen, Torabi, and Gronau teach the vehicle of claim 6, but do not appear to explicitly teach the following:
wherein the processor is further configured to: estimate a second rotation angle based on the 3D human body keypoints based on a width and a height of a body in a y-z plane, and determine the second rotation angle as the rotation angle of the occupant.
However, in the same field of endeavor, Tanaka teaches:
wherein the processor (Tanaka, para. 0030: “The vehicle body A includes a human body information processing device C (an example of a physique estimation device) that specifies seating positions of occupants (including a driver) from the images captured by the camera 6, acquires skeleton point coordinates of a plurality of locations from a body image of each occupant, and estimates a skeleton of the occupant.”) is further configured to:
estimate a second rotation angle based on the 3D human body keypoints based on a width and a height of a body in a y-z plane (Tanaka, Fig. 7; para. 0065: “Further, as illustrated in FIG. 7, the upper body of the occupant is in the lateral leaning posture. When the lateral leaning shoulder line Lay is inclined by the lateral leaning shoulder angle θ2 [i.e., a second rotation angle] with respect to the reference line Lap, and the lateral leaning spine line Lcy is inclined by the lateral leaning waist angle θ3 with respect to the waist line Lby, the posture determination unit 25 easily acquires a displacement amount of the upper body in a lateral direction.”; para. 0066: “Similarly, as illustrated in FIG. 7, when the lateral leaning spine line Lcy is inclined by the lateral leaning waist angle θ3 with respect to the waist line Lby, the posture determination unit 25 acquires a displacement amount of the upper body based on this lateral leaning waist angle θ3.”), and
determine the second rotation angle as the rotation angle of the occupant (Tanaka, para. 0065: “When the lateral leaning shoulder line Lay is inclined by the lateral leaning shoulder angle θ2 with respect to the reference line Lap, and the lateral leaning spine line Lcy is inclined by the lateral leaning waist angle θ3 with respect to the waist line Lby, the posture determination unit 25 easily acquires a displacement amount of the upper body in a lateral direction.”).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable likelihood of success to modify the invention disclosed by Chen, as modified by Torabi and Gronau, with the concept of using a width and a height of a body in a certain two-dimensional plane, when the body is an occupant in a vehicle, taught by Tanaka, in order to determine the rotation angle of the body and thereby prioritize the usage of vehicle safety systems (Tanaka, para. 0006: “When a passenger car is described as an example, it is possible to grasp a driving state from a driving posture of the driver seated in a driver seat, and a posture of an upper body of the driver seated in the driver seat is important from a viewpoint of protecting the driver when an air bag is activated.”; para. 0007: “Further, in a vehicle provided with side air bags, it is desirable to prevent a posture of an upper body of an occupant seated in an assistant passenger seat or a rear passenger seat from leaning against inner walls on which the side air bags are disposed.”).
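As a purely illustrative sketch (the mapping from width/height to an angle is an assumption, not Tanaka's disclosed computation), a second rotation angle can be read from the body's apparent width and height in the y-z plane, treating a larger width-to-height ratio as a larger lateral lean:

```python
import math

def lean_angle_deg(body_width, body_height):
    """Second rotation angle inferred from the body's apparent width
    and height in the y-z plane: the lateral extent over the vertical
    extent gives the lean of the torso from upright."""
    return math.degrees(math.atan2(body_width, body_height))
```

An upright occupant (zero lateral width offset) yields 0 degrees; equal width and height yield 45 degrees.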
Regarding claim 9 and analogous claim 20, Chen, Torabi, and Gronau teach the vehicle of claim 6, but do not appear to explicitly teach the following:
wherein the processor is further configured to: estimate a first rotation angle between a predetermined first reference line and a shoulder line in an x-y plane based on the 3D human body keypoints, estimate a second rotation angle based on the 3D human body keypoints based on a width and a height of a body in a y-z plane, and determine the rotation angle of the occupant based on the first rotation angle and the second rotation angle.
However, in the same field of endeavor, Tanaka teaches:
wherein the processor (Tanaka, para. 0030: “The vehicle body A includes a human body information processing device C (an example of a physique estimation device) that specifies seating positions of occupants (including a driver) from the images captured by the camera 6, acquires skeleton point coordinates of a plurality of locations from a body image of each occupant, and estimates a skeleton of the occupant.”) is further configured to:
estimate a first rotation angle between a predetermined first reference line and a shoulder line in an x-y plane based on the 3D human body keypoints (Tanaka, Fig. 8; para. 0056: “FIG. 8 illustrates a body image in which the upper body of the occupant seated in the predetermined seat S is in a twisted posture. The same figure illustrates a line obtained by connecting the skeleton points (skeleton point coordinates) of the left and right shoulders as a twisted shoulder line Laz [i.e., a shoulder line in an x-y plane based on the 3D human body keypoints] (skeleton line), a line obtained by connecting the skeleton points (skeleton point coordinates) of the left and right portions of the waist as a twisted waist line Lbz (skeleton line), and the spine as a twisted spine line Lcz. In this twisted posture, in FIG. 8 the twisted shoulder line Laz is inclined with respect to the shoulder line La of the reference posture [i.e., a predetermined first reference line] by a twisted angle θ4 (angle information).”),
estimate a second rotation angle based on the 3D human body keypoints based on a width and a height of a body in a y-z plane (Tanaka, Fig. 7; para. 0065: “Further, as illustrated in FIG. 7, the upper body of the occupant is in the lateral leaning posture. When the lateral leaning shoulder line Lay [i.e., based on a width…of a body] is inclined by the lateral leaning shoulder angle θ2 [i.e., a second rotation angle] with respect to the reference line Lap, and the lateral leaning spine line Lcy is inclined by the lateral leaning waist angle θ3 with respect to the waist line Lby [i.e., based on a width…of a body], the posture determination unit 25 easily acquires a displacement amount of the upper body in a lateral direction.”; para. 0066: “Similarly, as illustrated in FIG. 7, when the lateral leaning spine line Lcy [i.e., based on…a height of a body] is inclined by the lateral leaning waist angle θ3 with respect to the waist line Lby, the posture determination unit 25 acquires a displacement amount of the upper body based on this lateral leaning waist angle θ3.”), and
determine the rotation angle of the occupant based on the first rotation angle and the second rotation angle (Tanaka, para. 0116: “As a configuration in addition to the above configuration, the physique estimation device may further include a posture determination unit configured to determine, based on the angle information, a posture of an upper body of the occupant seated in a seat in the vehicle.”; para. 0065: “Further, as illustrated in FIG. 7, the upper body of the occupant is in the lateral leaning posture. When the lateral leaning shoulder line Lay is inclined by the lateral leaning shoulder angle θ2 [i.e., first rotation angle] with respect to the reference line Lap, and the lateral leaning spine line Lcy is inclined by the lateral leaning waist angle θ3 [i.e., second rotation angle] with respect to the waist line Lby, the posture determination unit 25 easily acquires a displacement amount of the upper body in a lateral direction [i.e., rotation angle of the occupant].”).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable likelihood of success to modify the invention disclosed by Chen, as modified by Torabi and Gronau, with the concept of determining a rotation angle or displacement amount of a body based on more than one rotational angle of the body, taught by Tanaka, in order to determine the comprehensive rotation angle of the body and thereby prioritize the usage of vehicle safety systems (Tanaka, para. 0006: “When a passenger car is described as an example, it is possible to grasp a driving state from a driving posture of the driver seated in a driver seat, and a posture of an upper body of the driver seated in the driver seat is important from a viewpoint of protecting the driver when an air bag is activated.”; para. 0007: “Further, in a vehicle provided with side air bags, it is desirable to prevent a posture of an upper body of an occupant seated in an assistant passenger seat or a rear passenger seat from leaning against inner walls on which the side air bags are disposed.”).
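For illustration only (the weighting scheme is an assumption; neither reference prescribes it), combining the first (x-y plane) and second (y-z plane) rotation angles into one occupant rotation estimate can be sketched as a weighted blend:

```python
def combined_rotation_deg(first_angle, second_angle, w1=0.5, w2=0.5):
    """Blend the shoulder-line angle and the width/height angle into a
    single occupant rotation estimate. Weights are illustrative."""
    return w1 * first_angle + w2 * second_angle
```

In practice the weights would be tuned, or the two angles kept separate as yaw and lean components; the point is only that both estimates feed the final determination.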
Regarding claim 11 and analogous claim 22, Chen, Torabi, and Gronau teach the vehicle of claim 6, but do not appear to explicitly teach the following:
wherein the processor is further configured to: estimate an angle between a predetermined second reference line and a line connecting keypoints corresponding to a predetermined body portion among the 3D human body keypoints, and determine the estimated angle as the tilt of the occupant, and wherein the predetermined second reference line is perpendicular to the ground.
However, in the same field of endeavor, Tanaka teaches:
wherein the processor (Tanaka, para. 0030: “The vehicle body A includes a human body information processing device C (an example of a physique estimation device) that specifies seating positions of occupants (including a driver) from the images captured by the camera 6, acquires skeleton point coordinates of a plurality of locations from a body image of each occupant, and estimates a skeleton of the occupant.”) is further configured to:
estimate an angle between a predetermined second reference line and a line connecting keypoints corresponding to a predetermined body portion among the 3D human body keypoints (Tanaka, para. 0051: “In the forward leaning posture, the forward leaning spine line Lcx [i.e., a line connecting keypoints corresponding to a predetermined body portion] is forward leaned by a forward leaning angle θ1 [i.e., an angle] (an example of angle information) as compared with the spine line Lc in the reference posture [i.e., a predetermined second reference line].”), and
determine the estimated angle as the tilt of the occupant (Tanaka, para. 0051: “In the forward leaning posture, the forward leaning spine line Lcx is forward leaned by a forward leaning angle θ1 (an example of angle information) as compared with the spine line Lc in the reference posture.”), and
wherein the predetermined second reference line is perpendicular to the ground (Tanaka, Fig. 6: line Lc; para. 0036: “Further, the spine line Lc [i.e., predetermined second reference line] can be regarded as a straight line in a vertical posture [i.e., perpendicular to the ground] along the spine of the occupant.”; para. 0037: “In the skeleton recognition processing, the coordinate of each joint position is determined with reference to a skeleton model stored in the skeleton model storage unit 12.”).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable likelihood of success to modify the invention disclosed by Chen, as modified by Torabi and Gronau, with the concept of estimating an angle between a vertical reference line and a line created by connecting vehicle occupant body key points, taught by Tanaka, in order to determine the posture of the occupant and effectively deploy vehicle safety systems (Tanaka, para. 0006: “When a passenger car is described as an example, it is possible to grasp a driving state from a driving posture of the driver seated in a driver seat, and a posture of an upper body of the driver seated in the driver seat is important from a viewpoint of protecting the driver when an air bag is activated.”; para. 0007: “Further, in a vehicle provided with side air bags, it is desirable to prevent a posture of an upper body of an occupant seated in an assistant passenger seat or a rear passenger seat from leaning against inner walls on which the side air bags are disposed.”).
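To make the tilt geometry concrete (illustrative only; the keypoint names and the y-up convention are assumptions), the angle between a vertical reference line perpendicular to the ground and a line connecting two body keypoints, such as neck and pelvis along the spine, is:

```python
import math

def tilt_angle_deg(upper_kp, lower_kp):
    """Angle between a vertical reference line (perpendicular to the
    ground, with y as the vertical axis) and the line connecting two
    3D body keypoints, e.g. neck to pelvis along the spine."""
    dx = upper_kp[0] - lower_kp[0]
    dy = upper_kp[1] - lower_kp[1]
    dz = upper_kp[2] - lower_kp[2]
    horizontal = math.hypot(dx, dz)  # displacement parallel to ground
    return math.degrees(math.atan2(horizontal, dy))
```

An upright spine gives 0 degrees; the angle grows as the occupant leans forward or sideways, matching Tanaka's forward leaning angle θ1 measured against the vertical spine line of the reference posture.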
Additional Relevant Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
JP-2017193283-A (2017-10-26) | Relevant to amended claims 1 and 12.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Leah N Miller whose telephone number is (703)756-1933. The examiner can normally be reached M-Th 8:30am - 5:30pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Helal Algahaim, can be reached at (571) 270-5227. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/L.N.M./Examiner, Art Unit 3666
/HELAL A ALGAHAIM/SPE, Art Unit 3666