DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 14 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because the recited "computer program product" is not defined in the specification and may encompass software per se or signals per se, which are ineligible subject matter. See MPEP 2106.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Barth et al. (US20210078609).
Regarding claim 1, Barth discloses a computer-implemented method for driving assistance in a vehicle (para. [0012]), the method comprising:
determining, based on an image capturing an operating element of the vehicle and a driver of the vehicle (para. [0115], Due to the mounting position of the sensor 11′ the corresponding image data provided by the sensor 11′ captures the steering wheel 14 and also a region around the steering wheel 14. This allows detecting hands of the driver 13 when they are not grasping the steering wheel 14), a distance between the operating element and a hand of the driver (d in fig. 2; para. [0060], determining a distance between the detected steering element and the detected at least one hand; para. [0135]); and
in response to the distance meeting a criterion being indicative of a possibility that the driver operates the operating element (para. [0039], a state in which the at least one hand cooperates with the steering element can comprise that the at least one hand touches the steering element and/or is in close proximity to the steering element, wherein “close proximity” means that the distance between the hand and the steering element is assumed to be below a threshold; para. [0058], a vision-based approach can be configured such that a meaningful estimation parameter about the spatial position of the hands can be generated when one or more hands are merely in close proximity to the steering element):
determining, based on a part of the image capturing the operating element, classification information of the operating element (para. [0079]; para. [0133], On the basis of the detected steering wheel 14 an image portion 26 is then determined in step 28 by cropping the image 18 to the steering wheel 14 including a margin around the steering wheel 14, cf. FIGS. 2a and 2b. The steering wheel portion 26 is then processed further by a neural network in step 30 in order to obtain the likelihood value p1);
determining, based on a part of the image capturing the driver, classification information of a pose of the driver (para. [0031], [0082], [0134]); and
determining, based on the distance (40 in fig. 5; para. [0135]), the classification information of the operating element (22, 28 and 30 in fig. 5; para. [0133]) and the classification information of the pose of the driver (32, 36, and 38 in fig. 5A; para. [0134]), whether the driver operates the operating element (20 in fig. 5A; para. [0152], The fusion rule can be configured to output a fused likelihood value p on the basis of the individual likelihood values p1, p2, p3, p4, wherein the fused likelihood value is an information on whether one or both of the hands 24, 24′ cooperate with the steering wheel 14; para. [0153]).
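For clarity of the record, the distance criterion and likelihood fusion relied upon above (Barth, paras. [0039], [0135], [0152]) can be illustrated by the following minimal Python sketch. The function name, the fusion rule (a simple average), and the threshold values are hypothetical illustrations; they are not asserted to be Barth's actual implementation or the claimed method:

    # Minimal sketch of the distance check and likelihood fusion described in
    # Barth at paras. [0039], [0135], and [0152]. All names, the averaging
    # fusion rule, and the thresholds are hypothetical illustrations.

    def hands_on_wheel(distance_m, p1, p2, p3, p4,
                       distance_threshold_m=0.05, decision_threshold=0.5):
        """Return True if the driver is deemed to operate the steering element.

        distance_m: estimated hand-to-steering-wheel distance
        p1..p4:     individual likelihood values from the separate classifier
                    branches (steering-wheel crop, driver pose, hand pose, etc.)
        """
        # "Close proximity" criterion: distance must be below an assumed threshold.
        if distance_m > distance_threshold_m:
            return False

        # Example fusion rule: simple average of the individual likelihoods.
        fused = (p1 + p2 + p3 + p4) / 4.0
        return fused >= decision_threshold

    # Usage example with illustrative values.
    print(hands_on_wheel(distance_m=0.02, p1=0.9, p2=0.8, p3=0.7, p4=0.85))  # True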
Regarding claim 2, Barth discloses a computer-implemented method further comprising:
defining a region in the image including at least a part of the operating element or detecting, using a detection network trained to locate parts of the operating element, the region including at least the part of the operating element (28 and 30 in fig. 5A, 26 in fig. 5B, para. [0079], [0133]); and
determining, based on the region, the location of the operating element (para. [0064], [0080]).
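As an illustration of the region-based localization cited for claim 2 (Barth, paras. [0064], [0079]-[0080]), the following sketch derives the operating element's location from a detected bounding region; the detection network itself is stubbed out, and the box format and names are assumptions:

    # Illustrative sketch: the operating element's location taken as the center
    # of a detected bounding region. The detector is assumed to exist elsewhere.

    def steering_wheel_location(box):
        """Given a bounding region (x_min, y_min, x_max, y_max) of the steering
        wheel in image coordinates, return its center as the element's location."""
        x_min, y_min, x_max, y_max = box
        return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

    # Example: a region produced by a (hypothetical) detection network.
    region = (420, 310, 780, 640)
    print(steering_wheel_location(region))  # (600.0, 475.0)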
Regarding claim 3, Barth discloses a computer-implemented method further comprising:
determining, based on a plurality of body keypoints of the driver, one or more body keypoints indicating the hand of the driver (para. [0025], This can comprise detecting characteristic key points of the body in the image data and/or fitting geometric models to parts of the body, for example to the hands); and
determining, based on the one or more body keypoints, the location of the hand (para. [0031], [0046]).
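The keypoint-based hand localization cited for claim 3 (Barth, paras. [0025], [0031]) can be illustrated as follows; using the wrist keypoint as a proxy for the hand, and the keypoint dictionary format, are illustrative assumptions:

    # Sketch: derive a hand location from detected body keypoints and compute
    # its distance to the operating element. Names and formats are assumptions.

    import math

    def hand_location(keypoints, side="right"):
        """Pick the wrist keypoint of the requested side as the hand location."""
        return keypoints[f"{side}_wrist"]

    def hand_to_element_distance(hand_xy, element_xy):
        """Euclidean distance between the hand and the operating element."""
        return math.dist(hand_xy, element_xy)

    keypoints = {"right_wrist": (610, 470), "left_wrist": (300, 500)}  # example
    print(hand_to_element_distance(hand_location(keypoints), (600.0, 475.0)))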
Regarding claim 4, Barth discloses a computer-implemented method wherein determining classification information of the operating element includes:
classifying, using a classification network for detecting operating of the operating element, the part of the image to detect whether the hand of the driver is located proximate to the operating element so as to allow operating the operating element (36 and 38 in fig. 5A, 34 in fig. 5B, para. [0134]); and
determining, based on the classifying, the classification information of the operating element indicating whether the driver operates the operating element (para. [0152]).
Regarding claim 5, Barth discloses a computer-implemented method wherein the classification network for detecting operating of the operating element is trained to detect whether the hand of the driver captured in the image can be located proximate to the operating element (para. [0034], [0067], [0070]).
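The crop-classification step cited for claims 4-5 (Barth, paras. [0134], [0152]) can be sketched as follows; the stand-in classifier below is a toy heuristic used only to show how a likelihood from a trained network would be turned into classification information, and all names and thresholds are assumptions:

    # Sketch: score a steering-wheel crop with a classifier and convert the
    # likelihood into classification information. The classifier is a stand-in
    # for a trained neural network.

    def classify_crop(crop):
        """Stand-in for a trained classification network. Returns a likelihood
        in [0, 1] that a hand is proximate enough to operate the element."""
        # Toy heuristic purely for illustration: brighter crops score higher.
        flat = [px for row in crop for px in row]
        return sum(flat) / (255.0 * len(flat))

    def operating_element_classification(crop, threshold=0.5):
        likelihood = classify_crop(crop)
        return {"likelihood": likelihood, "hand_proximate": likelihood >= threshold}

    crop = [[200, 180], [170, 190]]  # toy 2x2 "image" patch
    print(operating_element_classification(crop))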
Regarding claim 6, Barth discloses a computer-implemented method wherein determining classification information of the pose of the driver includes:
classifying, using a classification network for detecting a body pose of the driver, the driver captured in the image to detect the body pose of the driver (para. [0016], the hands can be considered as a portion of the body; para. [0057]); and
determining, based on the classifying, the classification information indicating whether the body pose of the driver is to allow the driver operating the operating element (para. [0057]).
Regarding claim 7, Barth discloses a computer-implemented method wherein the classification network of the body pose of the driver is trained to detect, based on a plurality of body keypoints of the driver (para. [0025]), body poses of the driver captured in the image (para. [0034], [0067], [0070]).
Regarding claim 8, Barth discloses a computer-implemented method wherein determining classification information of the pose of the driver includes:
classifying, using a classification network for detecting a hand pose of the driver, the driver captured in the image to detect the hand pose of the driver (para. [0016], the hands can be considered as a portion of the body; para. [0057]); and
determining, based on the classifying, the classification information indicating whether the hand pose of the driver is to allow operating the operating element (para. [0031], [0152]).
Regarding claim 9, Barth discloses a computer-implemented method wherein the classification network for detecting the hand pose of the driver is trained to detect, based on a plurality of body keypoints related to the hand of the driver, hand poses of the driver captured in the part of the image (para. [0031]).
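The keypoint-based hand-pose classification cited for claims 8-9 (Barth, para. [0031]) can be sketched as follows; the "gripping" rule is a toy stand-in for a trained classification network, and all names and radii are assumptions:

    # Sketch: decide from hand keypoints whether the hand pose would allow
    # operating the element. A trained network would replace the toy rule below.

    import math

    def classify_hand_pose(hand_keypoints, element_xy, grip_radius=60.0):
        """Toy classifier: pose 'allows operating' if all hand keypoints lie
        within an assumed radius of the operating element."""
        return all(math.dist(p, element_xy) <= grip_radius for p in hand_keypoints)

    hand = [(605, 468), (615, 480), (598, 474)]  # illustrative hand keypoints
    print(classify_hand_pose(hand, (600.0, 475.0)))  # True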
Regarding claim 10, Barth discloses a computer-implemented method further comprising:
cropping, based on the location of the operating element, the image to generate the part of the image capturing the operating element (para. [0133]).
Regarding claim 11, Barth discloses a computer-implemented method further comprising:
cropping, based on a plurality of body keypoints of the driver and/or using a classification network for detecting at least a part of the driver in the image, the image to generate the part of the image capturing at least the part of the driver (para. [0082]-[0084], [0134]).
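The cropping steps cited for claims 10-11 (Barth, paras. [0082]-[0084], [0133]) can be sketched as follows; the nested-list image representation, the margin value, and the function names are illustrative assumptions:

    # Sketch: crop around the operating element with a margin (claim 10) or
    # around the driver's body keypoints (claim 11).

    def crop(image, box):
        x_min, y_min, x_max, y_max = box
        return [row[x_min:x_max] for row in image[y_min:y_max]]

    def crop_around_element(image, element_box, margin=20):
        x_min, y_min, x_max, y_max = element_box
        h, w = len(image), len(image[0])
        return crop(image, (max(0, x_min - margin), max(0, y_min - margin),
                            min(w, x_max + margin), min(h, y_max + margin)))

    def crop_around_keypoints(image, keypoints, margin=20):
        xs = [x for x, _ in keypoints]
        ys = [y for _, y in keypoints]
        h, w = len(image), len(image[0])
        return crop(image, (max(0, min(xs) - margin), max(0, min(ys) - margin),
                            min(w, max(xs) + margin), min(h, max(ys) + margin)))

    image = [[0] * 100 for _ in range(80)]  # toy 100x80 image
    print(len(crop_around_element(image, (30, 20, 60, 50))))  # rows in the crop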
Regarding claim 12, Barth discloses a computer-implemented method further comprising:
generating a control signal for driving assistance indicating whether the driver operates the operating element (para. [0012], [0117]).
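The control-signal generation cited for claim 12 (Barth, paras. [0012], [0117]) can be sketched as follows; the signal format and the escalation behavior are assumptions for illustration only:

    # Sketch: map the hands-on/off determination to a driving-assistance signal.

    def driving_assistance_signal(driver_operates_element, hands_off_seconds=0.0):
        """Return a simple control signal from the hands-on determination."""
        if driver_operates_element:
            return {"hands_on": True, "action": "none"}
        # Escalate the longer the hands stay off the operating element.
        action = "warn" if hands_off_seconds < 3.0 else "request_takeover"
        return {"hands_on": False, "action": action}

    print(driving_assistance_signal(False, hands_off_seconds=4.2))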
Regarding claims 13-15, the claims recite subject matter similar to that of claim 1 and are rejected for the same reasons as stated above.
Related Art
Schiebener et al. (US20200320737) – see figs. 2a-2b, para. [0067]-[0072]
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEON VIET Q NGUYEN, whose telephone number is (571) 270-1185. The examiner can normally be reached Mon-Fri 11AM-7PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Gregory Morse, can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LEON VIET Q NGUYEN/ Primary Examiner, Art Unit 2663