Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 9, 2026 has been entered.
Response to Arguments
Applicant’s arguments and amendment have persuasively overcome the claim objections and most of the 112 rejections.
As to the 101 rejection, MPEP 2106.04(d)(II) states “Examiners evaluate integration into a practical application by: (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception(s); and (2) evaluating those additional elements individually and in combination to determine whether they integrate the exception into a practical application … .” Because there are no additional elements, the below analysis comports with the latest eligibility guidance.
As to the 103 rejections, Applicant’s assertion that the claim features are not taught is addressed in the below mapping.
The remaining issues are addressed below.
Claim Objections
The below claims are objected to because of the following informalities:
Claims 1, 8, and 15 recite “landmarks that includes,” which should read “landmarks that include” because “landmarks” is plural.
Claims 2 and 9 recite “learnnig” instead of “learning.”
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1, 8, and 15 recite “and based on optical data captured by an optical lens and sensor data resulting from processing the optical data by a sensor, wherein the sensor data includes an image.” Given that the claim already recites that this data is captured during a video conference (i.e., it is sensor data gathered through an optical lens), it is unclear what this limitation adds.
Claims 1, 8, and 15 recite “the head pose of the user is at zero degree angle relative to a horizontal axis,” but it is unclear how to assess the angle of a head pose (e.g., if the person is sitting upright, facing forward, is that ninety degrees from horizontal?). The examiner’s best guess is that the angle is that of the line drawn between the eye corners, an interpretation that implicitly assumes the user’s face is perfectly symmetric.
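For illustration only, the examiner's interpretation above (head pose angle taken as the angle of the line through the two detected eye corners, measured against the horizontal image axis) could be computed as follows; the coordinates and the function name are hypothetical and are not drawn from the claims or the cited references:

```python
import math

def head_roll_angle(left_corner, right_corner):
    """Angle (degrees) of the line through two eye-corner points,
    relative to the horizontal image axis."""
    (x1, y1), (x2, y2) = left_corner, right_corner
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

# Eye corners at the same height: the claimed zero-degree head pose.
print(head_roll_angle((100, 200), (160, 200)))  # 0.0
# Right corner 15 px lower (image y grows downward): a tilted head.
print(head_roll_angle((100, 200), (160, 215)))
```

Under this reading, the claimed “zero degree angle relative to a horizontal axis” would simply require the two eye corners to lie at the same image height.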
Claims 3, 10, and 17 recite “key” points, but this term is subjective because the claims provide no objective standard for which points qualify as “key.” MPEP 2173.05(b)(IV). Reciting “eye corners” would overcome this rejection.
Dependent claims are likewise rejected.
The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:
Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
Claims 2, 9, and 16 are rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends.
Claims 2 and 9 fail to further limit their parent claims. As an aside, the examiner has not identified support in the specification for a technique to estimate head pose that is not based on deep learning.
Claim 16 fails to further limit claim 15. Note that claim 16 does not place any requirements on the relative angle (e.g., that it be a certain number of degrees).
Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental process) without significantly more.
Step 1: Claim 1 (and its dependents) recite a method, and processes are eligible subject matter.
Claim 8 (and its dependents) recite a system, and machines are eligible subject matter.
Claim 15 (and its dependents) recite a non-transitory computer readable medium, and manufactures are eligible subject matter.
Step 2A, prong one: All of the elements of claims 1-19 are a mental process because a person can look at someone else and tell them to tilt their head. MPEP 2106.04(a)(2)(III)(C) explains that performance using a generic computer or in a computer environment is still a mental process. In particular, this section begins by citing Gottschalk v. Benson, 409 U.S. 63 (1972): “The Supreme Court recognized this in Benson, determining that a mathematical algorithm for converting binary coded decimal to pure binary within a computer’s shift register was an abstract idea.” In Benson, the Supreme Court did not separately analyze the computer hardware at issue; the specifics of the claimed hardware appear only in an appendix to the decision.
Because there are no additional elements, no further analysis is required for Step 2A, prong two or Step 2B.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-19 (all claims) are rejected under 35 U.S.C. 103 as being unpatentable over US20220239513A1 (“Swierk”) in view of US20140139655A1 (“Mimar”).
A method comprising:
detecting, by a processor, facial landmarks of a user based on a deep learning model during a video conference operation and based on optical data captured by an optical lens and sensor data resulting from processing the optical data by a sensor, wherein the sensor data includes an image; (Swierk, abstract, “a trained neural network to determine image features for the intelligent face framing management system.” See also, title “… for videoconferencing applications”)
generating a rectangular representation of a face of the user from the image, wherein the face of the user include points representing eye corners based on the facial landmarks detected using the deep learning model during the video conference operation; (Swierk, abstract, “a trained neural network to determine image features for the intelligent face framing management system.” Swierk’s framing teaches the claimed rectangular representation.)
estimating a head pose of the user based on the facial landmarks that includes eye corners, wherein the estimating of the head pose includes drawing a line through the points representing the eye corners; (Swierk, [0121] “For example, one method may include use of an image recognition system to identify the user's eyes and mouth within the captured test videoframes, and to calculate the degree to which the user's head is rotated away from center based on the distances between the corners of the user's eyes and mouth.” Swierk, [0121] explicitly mentions centers of the eyes (rather than corners), but to the extent there is a difference, these are known substitutes. MPEP 2144.06(II).)
determining adjustment information based on the head pose of the user relative to the view zone of the camera, (Swierk, [0121] “Using any of these distance measurements, the firmware for the peripheral cameras may determine gaze and head orientation vectors of a user's image indicating a degree to which the user's head is looking away from the camera capturing that test videoframe of the user.” Swierk’s “degree to which the user's head is looking away” teaches the claimed adjustment information.)
Swierk is not relied on for the below claim limitations.
However, Mimar teaches:
wherein the adjustment information includes instructions to turn a head of the user such that the head pose of the user is at zero degree angle relative to a horizontal axis of the view zone of the camera based on the rectangular representation of the face of the user; and (Mimar, Fig. 32, “Head Roll Angle > A Constant?”)
if the head pose of the user is positioned at an angle greater than zero degrees relative to the horizontal axis of the view zone of the camera, then providing audible guidance based on the adjustment information. (Mimar, [0081] “FIG. 27 shows an embodiment of the present invention that also uses adaptive adjustment of center gaze point automatically without any human involved calibration.” (for the claimed camera adjustment) and Fig. 32 shown below (for the claimed audible guidance))
[Mimar, Fig. 32, reproduced as media_image1.png (137 × 288, greyscale)]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Mimar to the teachings of Swierk such that Mimar’s feedback is provided to the user of Swierk to monitor for drowsiness and distraction. Mimar, abstract. Note that Swierk also discloses a desire to minimize distraction, see, e.g., [0133].
Based on the above, this is an example of “combining prior art elements according to known methods to yield predictable results.” MPEP 2143.
2. The method of claim 1, wherein the deep learnnig model is used to estimate the head pose. (Swierk, Fig. 7, step 714 and [0122] “For example, if the user's gaze is oriented at 20% to the right of a first camera, and oriented at 5% to the left of a second camera in an embodiment, the intelligent face framing management system may identify the second camera as the gaze-centered camera.” Swierk’s intelligent face framing teaches the claimed deep learning model, see, e.g., the neural network taught in the abstract as part of the intelligent face framing management system.)
3. The method of claim 1, further comprising estimating an upper body pose of the user based on key points. (Mimar, Fig. 47)
4. The method of claim 3, further comprising if the upper body of the user is rotated, then providing another audible guidance to the user. (Mimar, Fig. 32 shown for claim 1)
5. The method of claim 1, wherein the audible guidance includes to turn the head of the user towards either left or right. (Mimar, Fig. 24. Mimar’s largest box teaches that an incorrect pitch, yaw, or roll is detected as drowsiness; thus, to correct this, the user would turn their head to be paying attention (e.g., Fig. 19), which teaches the claimed guidance to turn.)
6. The method of claim 1, wherein the audible guidance is a beeping sound. (Mimar, [0134] “a driver alert is issued by a beep tone referred to as chime”)
7. The method of claim 6, wherein loudness of the beeping sound is based on the angle of the head pose of the user relative to a vertical axis of the view zone of the camera. (Mimar, [0201] “If the head roll angle exceeds a threshold constant in the left or right direction, a more intrusive drowsiness warning sound is generated. If the head roll angle is with normal limits of daily use, then a lesser level and type of sound alert is issued.”)
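Mimar’s two-tier alert scheme ([0201]), with loudness graded by head angle as claim 7 recites, might be sketched as follows; the threshold value, volume scale, and function name are hypothetical illustrations chosen by the examiner, not Mimar’s disclosure:

```python
def beep_volume(head_angle_deg, threshold_deg=10.0, max_volume=1.0):
    """Loudness of the alert beep as a function of how far the head
    pose angle deviates from the view zone's axis: a lesser-level
    alert within normal limits, a louder alert scaling with the
    excess angle (capped at max_volume)."""
    excess = abs(head_angle_deg) - threshold_deg
    if excess <= 0:
        # Within normal limits of daily use: lesser level of alert.
        return 0.2 * max_volume
    # More intrusive warning, growing with the angle.
    return min(max_volume, 0.2 + 0.8 * excess / 45.0)
```

This kind of graded response is one way a skilled artisan might implement loudness “based on the angle of the head pose,” consistent with Mimar’s lesser and more intrusive alert levels.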
Claims 8-14 are rejected as per the corresponding method claims. Mimar, Figs. 2 or 33 teach the claimed processor and memory.
Claims 15-19 are rejected as per the corresponding method claims. Mimar, Figs. 2 or 33 teach the claimed non-transitory computer readable medium.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 12170579 B2 – claim 1 “obtaining information about gaze of a first participant in a video conference while the first participant,”
US 12223769 B2 – “Electronic Device And Operating Method Of Electronic Device For Correcting Error During Gaze Direction Recognition”
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID ORANGE whose telephone number is (571)270-1799. The examiner can normally be reached Mon-Fri, 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID ORANGE/Primary Examiner, Art Unit 2663