Prosecution Insights
Last updated: April 18, 2026
Application No. 18/669,941

JUDGMENT SYSTEM, ELECTRONIC SYSTEM, JUDGMENT METHOD AND DISPLAY METHOD

Non-Final OA: §101, §103, §112

Filed: May 21, 2024
Examiner: FATIMA, UROOJ
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: Realtek Semiconductor Corp.
OA Round: 1 (Non-Final)

Grant Probability: 100% (Favorable)
Expected OA Rounds: 1-2
Projected Time to Grant: 2y 9m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 100% (1 granted / 1 resolved), above average, +38.0% vs TC avg
Interview Lift: +100.0% (strong), based on resolved cases with interview
Typical Timeline: 2y 9m average prosecution
Currently Pending: 16 applications
Career History: 17 total applications across all art units

Statute-Specific Performance

Statute   Rate     vs TC Avg
§101      24.6%    -15.4%
§103      41.5%    +1.5%
§102      12.3%    -27.7%
§112      20.0%    -20.0%

Note: Tech Center averages are estimates. Based on career data from 1 resolved case.
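
If it helps to sanity-check the table, the deltas are consistent with a flat Tech Center baseline: subtracting each delta from its rate yields 40.0% in every row. A minimal Python sketch of that arithmetic follows; all names are illustrative and nothing here comes from the tool itself:

    # Assumed formula: delta = examiner statute rate - Tech Center average.
    # The 40.0% baseline is implied by the four rows above, not documented.
    TC_AVG = 0.40

    examiner_rates = {"§101": 0.246, "§103": 0.415, "§102": 0.123, "§112": 0.200}

    for statute, rate in examiner_rates.items():
        delta = rate - TC_AVG
        print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")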

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. TW112132895, filed on 08/30/2023.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 05/21/2024, 10/28/2024, and 02/14/2025 have been considered by the examiner.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

- “feature acquisition module” in claims 1 and 3;
- “judgment module” in claim 1;
- “output feature tensor generation module” in claim 7;
- “prediction modules” in claim 7; and
- “display module” in claim 8.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

Claims 1 and 3: “feature acquisition module” corresponds to Figure 1, element 101. “The judgment system 100 comprises a feature acquisition module 101 and a judgment module 102.” (Paragraph [0012].) “The processing units 1001-1 through 1001-R read corresponding computer programs…Such process forms the judgment system 100 and the electronic system 800…each module of the judgment system 100 and the electronic system 800 may also be implemented using hardware…processing units 1001-1 through 1001-R may be an integrated circuit chip with signal processing capability…The processing units 1001-1 through 1001-R may be general purpose processors…or other programmable logic devices” (Paragraphs [0052]-[0053].)

Claim 1: “judgment module” corresponds to Figure 1, element 102 (see Paragraph [0012] and Paragraphs [0052]-[0053], quoted above).

Claim 7: “output feature tensor generation module” corresponds to Figure 4b, element 401. “the feature acquisition module 101 comprises a neural network module 400… the neural network module 400 comprises an output feature tensor generation module 401 and a prediction module” (Paragraph [0027]).

Claim 7: “prediction module” corresponds to Figure 4b, elements 402-1 through 402-M. “the feature acquisition module 101 comprises a neural network module 400… the neural network module 400 comprises an output feature tensor generation module 401 and a prediction module 402-1 through a prediction module 402-M” (Paragraph [0027]).

Claim 8: “display module” corresponds to Figure 10, element 1004. “The display element 1004 may be for example a liquid crystal display, a plasma display, a computer display (for example, a variable graphics array (VGA) display, a super VGA display or a cathode ray tube display), or a display device of another similar type, but the instant disclosure is not limited” (Paragraph [0051]).

Dependent claims 2, 4-6 and 8 are similarly interpreted due to their dependency.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-6, 8-14, and 16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The limitations, under their broadest reasonable interpretation, cover a mental process (a concept performed in the human mind, including an observation, evaluation, judgment, or opinion), organizing human activity, and mathematical concepts and calculations. Independent claims 1 and 9 recite a system and a method. This judicial exception is not integrated into a practical application because the steps do not add meaningful limitations that would constitute a specific application to a particular technological problem to be solved. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps of the claimed invention can be performed mentally, and no additional features in the claims would preclude them from being performed as such, except for the generic computer elements recited at a high level of generality (i.e., processor, memory).

According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:

STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), or
STEP 2: the claim recites a judicial exception, e.g. an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:
STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Using the two-step inquiry, it is clear that independent claims 1 and 9 are directed to an abstract idea, as shown below.

STEP 1: Do the claims fall within one of the statutory categories? YES. Independent claims 1 and 9 are directed to a machine and a process.

STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon or an abstract idea? YES, the claims are directed toward a mental process (i.e., an abstract idea). With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:

- Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations;
- Certain methods of organizing human activity – fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and
- Mental processes – concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).

Independent claims 1 and 9 comprise a mathematical concept that can be practicably performed in the human mind (or by generic computers or components configured to perform the method) and are, therefore, directed to an abstract idea.

Regarding independent claim 1, the limitations recite: A judgment system comprising: a feature acquisition module (generic computer component) configured to receive an image and obtain a first key point coordinate, a second key point coordinate, and a size of a face box of a user based on the image (data gathering); and a judgment module configured to execute following steps (generic computer component): (a) obtaining a judgment value based on an ordinate of the first key point coordinate, an ordinate of the second key point coordinate, and the size of the face box (mathematical concept); and (b) sending a rotation signal in response to that the judgment value satisfies a rotation condition (extra post-solution activity).

Regarding independent claim 9, the limitations recite: A judgment method, comprising: (a) receiving an image by a feature acquisition module and obtaining a first key point coordinate, a second key point coordinate, and a size of a face box of a user by the feature acquisition module based on the image (data gathering); and (b) performing following steps by the judgment module (generic computer component): (b1) obtaining a judgment value based on an ordinate of the first key point coordinate, an ordinate of the second key point coordinate, and a size of the face box (mathematical concept); and (b2) sending a rotation signal in response to that the judgment value satisfies a rotation condition (extra post-solution activity).

These limitations, as drafted, recite a simple process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind or by a human. The Examiner notes that under MPEP 2106.04(a)(2)(I), the courts consider a mathematical concept “as laws of nature, and at other times described these concepts as judicial exceptions without specifying a particular type of exception.” Mathematical concepts need not be expressed in mathematical symbols because “[w]ords used in a claim operating on data to solve a problem can serve the same purpose as a formula.” In re Grams, 888 F.2d 835, 837 and n.1, 12 USPQ2d 1824, 1826 and n.1 (Fed. Cir. 1989). See: SAP America, Inc. v. InvestPic, LLC, 898 F.3d 1161, 1163, 127 USPQ2d 1597, 1599 (Fed. Cir. 2018); Digitech Image Techs., LLC v. Elecs. for Imaging, Inc., 758 F.3d 1344, 1350, 111 USPQ2d 1717, 1721 (Fed. Cir. 2014).

As such, the mathematical concept is simply computing a judgment value based on the ordinates of the key points and the size of a face box. The mere nominal recitation that the various steps are being executed by a feature acquisition module and a judgment module does not take the limitations out of the mathematical concept grouping. Thus, the claims recite a mathematical concept.

STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? NO, the claims do not recite additional elements that integrate the judicial exception into a practical application. With regard to STEP 2A (PRONG 2), the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:

- an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
- an additional element applies or uses a judicial exception to affect a particular treatment or prophylaxis for a disease or medical condition;
- an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
- an additional element effects a transformation or reduction of a particular article to a different state or thing; and
- an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.

While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:

- an additional element merely recites the words “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;
- an additional element adds insignificant extra-solution activity to the judicial exception; and
- an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.

Independent claims 1 and 9 do not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application.
Independent claims 1 and 9 disclose a feature acquisition module, a judgment module, and “sending a rotation signal in response to that the judgment value satisfies a rotation condition,” which are generic computer components and/or insignificant pre/post-solution extra activity that do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea in a method. These limitations are recited at a high level of generality (i.e., as a general action or change being taken based on the results of the acquiring step) and amount to mere post-solution actions, which is a form of insignificant extra-solution activity. Further, the claims are claimed generically and are operating in their ordinary capacity such that they do not use the judicial exception in a manner that imposes a meaningful limit on the judicial exception. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO, the claims do not recite additional elements that amount to significantly more than the judicial exception. With regard to STEP 2B, whether the claims recite additional elements that provide significantly more than the recited judicial exception, the guidelines specify that the pre-guideline procedure is still in effect. Specifically, examiners should continue to consider whether an additional element or combination of elements:

- adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or
- simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.

Independent claims 1 and 9 do not recite any additional elements that are not well-understood, routine or conventional. The use of generic computer elements is a routine, well-understood and conventional process that is performed by computers. Thus, since independent claims 1 and 9 (a) are directed toward an abstract idea, (b) do not recite additional elements that integrate the judicial exception into a practical application, and (c) do not recite additional elements that amount to significantly more than the judicial exception, it is clear that independent claims 1 and 9 are not eligible subject matter under 35 U.S.C. 101.

Regarding claims 2-6, 8, 10-14, and 16: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. In detail, claims 2-6 and 8 depend on claim 1, and claims 10-14 and 16 depend on claim 9, and add:

- wherein the first and second key point coordinates are the right and left shoulder (claims 2 and 10);
- computing the size of the face box by taking the difference of the ordinate points (claims 3 and 11);
- calculating the judgment value as the ratio of the difference of the key points’ ordinates to the size of the face box, where the rotation condition is that the value is greater than or equal to a default value (claims 4, 5, 12, and 13);
- wherein a neural network outputs the coordinates of the first and second key points, and outputs the size of the face box (claims 6 and 14); and
- changing the orientation of a display based on a rotation signal (claims 8 and 16).

Regarding claim 7: the additional limitation does integrate the mathematical concept into a practical application or adds significantly more to the mathematical concept. The limitation “wherein the neural network module comprises an output feature tensor generation module and a plurality of prediction modules, and the output feature tensor generation module is configured to generate a plurality of output feature tensors having different sizes based on the image; each of the prediction modules is configured to receive a corresponding one of the output feature tensors so as to correspondingly generate an information tensor which corresponds to the corresponding one of the output feature tensors; the information tensor is configured to indicate a location information of the face box, a confidence score information, and a category information as well as a location information of the first key point coordinate and a location information of the second key point coordinate; and the feature acquisition module outputs the first key point coordinate, the second key point coordinate, and the size of the face box of the user based on all of the information tensors generated by the prediction modules” integrates the mathematical concept into a practical application.

Regarding claim 15: the additional limitation does integrate the mathematical concept into a practical application or adds significantly more to the mathematical concept. The limitation “wherein the neural network module comprises an output feature tensor generation module and a plurality of prediction modules, and the step (a1) comprises: (a11) generating a plurality of output feature tensors having different sizes by the output feature tensor generation module based on the image; (a12) receiving a corresponding one of the output feature tensors by each of the prediction modules so as to correspondingly generate an information tensor which corresponds to the corresponding one of the output feature tensors, wherein the information tensor is configured to indicate a location information of the face box, a confidence score information, and a category information as well as a location information of the first key point coordinate and a location information of the second key point coordinate; and (a13) outputting the first key point coordinate, the second key point coordinate and the size of the face box of the user by the feature acquisition module based on all of the information tensors generated by the prediction modules” integrates the mathematical concept into a practical application.
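
To make the arithmetic that the §101 analysis keeps referring to concrete, here is a minimal sketch of the computation recited in claims 1 and 3-5 (mirrored in method claims 9 and 11-13). All function names, the coordinate convention, and the threshold are illustrative assumptions; the claims do not fix a default value:

    # Hedged sketch of the claimed arithmetic. Names, coordinate convention,
    # and the 0.5 threshold are illustrative assumptions only.

    def face_box_size(upper_left_y: float, lower_right_y: float) -> float:
        # Claims 3/11: subtract the ordinate of the face box's lower-right
        # corner from the ordinate of its upper-left corner; the sign depends
        # on whether the y-axis points up or down.
        return upper_left_y - lower_right_y

    def judgment_value(y_first: float, y_second: float, box_size: float) -> float:
        # Claims 4/12: absolute ordinate difference between the two key points
        # (per claims 2/10, the right and left shoulders) over the box size.
        return abs(y_second - y_first) / box_size

    def rotation_condition(value: float, default_value: float = 0.5) -> bool:
        # Claims 5/13: true when the judgment value is greater than or equal
        # to a default value (0.5 is a placeholder, not from the disclosure).
        return value >= default_value

    # Illustrative run with a y-up convention: face box spans y=400 to y=100,
    # shoulder ordinates are 320 and 80.
    size = face_box_size(400, 100)          # 300
    value = judgment_value(320, 80, size)   # 0.8
    if rotation_condition(value):
        print("send rotation signal")       # claim 1 step (b) / claim 9 step (b2)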
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 8, 9, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Senechal et al. (US 10,614,289 B2) (hereinafter, “Senechal”) in view of Cheng et al. (“iRotate: automatic screen rotation based on face orientation.” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2012.) (hereinafter, “Cheng”).

Regarding claim 1, Senechal discloses a judgment system comprising: a feature acquisition module configured to receive an image and obtain a first key point coordinate and a second key point coordinate (facial landmarks in Column 5, lines 31-39, equate to key points: “100 includes performing face detection to initialize locations 120 for a first set of facial landmarks (i.e. key points) within a first frame from the video. The face detection can be based on other facial points, identifying characteristics, etc. The landmarks can include corners of the mouth, corners of eyes, eyebrow corners, tip of nose, nostrils, chin, tips of ears, distinguishing marks and features, and so on.”; Column 15, lines 32-35: “The learning can include mapping of the x-y coordinates (locations) of the facial landmarks to the coordinates of the bounding box 1030.”), and a size of a face box (dimensions in Column 6, lines 23-34, equate to size of face box) of a user based on the image (Column 6, lines 23-34: “The providing of the output of the facial detector can include generating a bounding box 152 for the face. A first bounding box can be generated for a face that is detected in a first frame. The first bounding box can be a square, a rectangle, and/or any other appropriate geometric shape. The first bounding box can be substantially the same as the bounding box generated by a face detector. The first bounding box can be a minimum-dimension bounding box, where the dimension can include area, volume, hyper-volume (i.e. size of face box), and so on. The first bounding box can be generated based on analysis, estimation, simulation, prediction, and so on.”); and a judgment module configured to execute following steps: (a) obtaining a [judgment value] based on an ordinate of the first key point coordinate, an ordinate of the second key point coordinate (Column 15, lines 32-35, quoted above), and the size of the face box (Column 6, lines 23-34, quoted above).

However, Senechal fails to teach a judgment value and sending a rotation signal in response to that the judgment value satisfies a rotation condition.

Cheng teaches a judgment value (the orientation threshold on Page 2206, left column, paragraph 3, equates to the judgment value: “we define θ as the angle between device’s x-axis and earth’s horizontal plane, and φ as the angle between device’s y-axis. We experimentally measured the orientation threshold (i.e. judgment value) used by iPhone and iPad, by monitoring the accelerometer readings and rotating the devices as slowly as possible until the screen rotated. We found that the threshold is θ - φ = 30, with 2 degrees of dead band, for both iPhone and iPad.”) and sending a rotation signal in response to that the judgment value satisfies a rotation condition (Page 2207, left column, paragraph 4: “Our functional prototype automatically rotates screens to the orientation detected by the face detection API. It counts the number of frames with detected face orientation within a 0.5-second window, and rotates to the most frequently detected orientation. The 0.5-second threshold is the average rotation delay for iPhone and iPad”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Senechal’s reference to include a judgment value and sending a rotation signal in response to that the judgment value satisfies a rotation condition, as taught by Cheng’s reference. The motivation for doing so would have been to auto-rotate a screen of a device using the orientation threshold based on the face orientation, as suggested by Cheng (see Cheng, Page 2206, left column, paragraphs 2 and 3). Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Cheng with Senechal to obtain the invention specified in claim 1.

Regarding claim 8, which incorporates claim 1, Senechal fails to teach a display module configured to change an orientation direction of a screen-displayed content in response to that the display module receives the rotation signal. Cheng teaches a display module configured to change an orientation direction of a screen-displayed content in response to that the display module receives the rotation signal (Page 2207, left column, paragraph 4, quoted above). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Senechal’s reference to include a display module configured to change an orientation direction of a screen-displayed content in response to that the display module receives the rotation signal, as taught by Cheng’s reference.
The motivation for doing so would have been to automatically rotate a screen of a device to match the face orientation, as suggested by Cheng (see Cheng, Page 2206, left column, paragraphs 2 and 3). Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Cheng with Senechal to obtain the invention specified in claim 8.

Regarding claim 9, Senechal discloses a judgment method, comprising: (a) receiving an image by a feature acquisition module and obtaining a first key point coordinate and a second key point coordinate (facial landmarks in Column 5, lines 31-39, equate to key points; Column 15, lines 32-35; both quoted above in the discussion of claim 1), and a size of a face box (dimensions in Column 6, lines 23-34, equate to size of face box) of a user by the feature acquisition module based on the image (Column 6, lines 23-34, quoted above); and (b) performing following steps by the judgment module: (b1) obtaining a [judgment value] based on an ordinate of the first key point coordinate, an ordinate of the second key point coordinate (Column 15, lines 32-35, quoted above), and a size of the face box (Column 6, lines 23-34, quoted above).

However, Senechal fails to teach a judgment value and sending a rotation signal in response to that the judgment value satisfies a rotation condition. Cheng teaches a judgment value (the orientation threshold on Page 2206, left column, paragraph 3, quoted above, equates to the judgment value) and sending a rotation signal in response to that the judgment value satisfies a rotation condition (Page 2207, left column, paragraph 4, quoted above). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Senechal’s reference to include a judgment value and sending a rotation signal in response to that the judgment value satisfies a rotation condition, as taught by Cheng’s reference. The motivation for doing so would have been to auto-rotate a screen of a device using the orientation threshold based on the face orientation, as suggested by Cheng (see Cheng, Page 2206, left column, paragraphs 2 and 3). Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Cheng with Senechal to obtain the invention specified in claim 9.

Regarding claim 16 (drawn to a method), claim 16 is rejected the same as claim 8; the arguments presented above for claim 8 are equally applicable to claim 16, and all the other limitations similar to claim 8 are not repeated herein, but incorporated by reference.

Claims 2 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Senechal in view of Cheng, and further in view of Huang (US 2023/0237694 A1).

Regarding claim 2, Senechal and Cheng fail to teach wherein the first key point coordinate is a coordinate of a right shoulder point of the user, and the second key point coordinate is a coordinate of a left shoulder point of the user. Huang teaches wherein the first key point coordinate is a coordinate of a right shoulder point of the user, and the second key point coordinate is a coordinate of a left shoulder point of the user (Paragraph [0011]: “obtaining left-right shoulder relation information according to bone coordinates at left and right shoulders of the human body”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Senechal in view of Cheng to include wherein the first key point coordinate is a coordinate of a right shoulder point of the user, and the second key point coordinate is a coordinate of a left shoulder point of the user, as taught by Huang’s reference.
The motivation for doing so would have been to determine the position of a person based on the shoulder information, as suggested by Huang (see Huang, Paragraph [0011]). Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Cheng and Huang with Senechal to obtain the invention specified in claim 2.

Regarding claim 10 (drawn to a method), claim 10 is rejected the same as claim 2; the arguments presented above for claim 2 are equally applicable to claim 10, and all the other limitations similar to claim 2 are not repeated herein, but incorporated by reference.

Claims 3 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Senechal in view of Cheng, and further in view of Zutshi (US 10,525,599 B1).

Regarding claim 3, Senechal discloses a face box (Column 6, lines 23-34: “The providing of the output of the facial detector can include generating a bounding box 152 for the face…The first bounding box can be generated based on analysis, estimation, simulation, prediction, and so on.”). However, Senechal and Cheng fail to teach wherein the feature acquisition module is configured to obtain the size of the [face] box based on following steps: subtracting an ordinate of a lower right point coordinate of the face box from an ordinate of an upper left point coordinate of the face box so as to obtain a difference; and setting the size of the [face] box as the difference.

Zutshi teaches wherein the feature acquisition module is configured to obtain the size of the [face] box based on following steps (Column 6, lines 9-14: “the system may first determine the size thresholds (e.g., minimum width, minimum height, maximum width, and/or maximum height) based on the model number or other identifier associated with the mobile device 104, and apply the size thresholds on the bounding boxes.”): subtracting an ordinate of a lower right point coordinate of the face box from an ordinate of an upper left point coordinate of the face box so as to obtain a difference; and setting the size of the [face] box as the difference (Column 8, lines 2-9: “the pixel at the top-right corner of the bounding box may have coordinate values of (900, 1200), such that the bounding box has a width of 600 pixels (e.g., the difference between the x-coordinate values of the two pixels at the bottom-left and top-right corners) and a height of 900 pixels (e.g., the difference between the y-coordinate values of the two pixels).”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Senechal in view of Cheng to include wherein the feature acquisition module is configured to obtain the size of the [face] box based on the steps recited above, as taught by Zutshi’s reference. The motivation for doing so would have been to filter out contours that do not reach a predetermined size value, as suggested by Zutshi (see Zutshi, Column 5, lines 47-53). Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Cheng and Zutshi with Senechal to obtain the invention specified in claim 3.

Regarding claim 11 (drawn to a method), claim 11 is rejected the same as claim 3; the arguments presented above for claim 3 are equally applicable to claim 11, and all the other limitations similar to claim 3 are not repeated herein, but incorporated by reference.

Claims 6, 7, 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Senechal in view of Cheng, and further in view of Zeng et al. (“Proposal pyramid networks for fast face detection.” Information Sciences 495 (2019): 136-149.) (hereinafter, “Zeng”).

Regarding claim 6, which incorporates claim 1, Senechal teaches wherein the feature acquisition module comprises a neural network module (Column 10, lines 25-33: “Classifiers can be binary, multiclass, linear and so on. Algorithms for classification can be implemented using a variety of techniques including neural networks, kernel estimation, support vector machines, use of quadratic surfaces, and so on. Classification can be used in many application areas such as computer vision, speech and handwriting recognition, and so on. Classification can be used for biometric identification of one or more people in one or more frames of one or more videos.”), and the neural network module is configured to receive the image and output the first key point coordinate and the second key point coordinate of the user (facial landmarks in Column 7, lines 34-39, equate to key points: “The flow 100 includes analyzing the face using a plurality of classifiers 175. The face that is analyzed can be the first face, the second face, the third face, and so on. The face can be analyzed to determine facial landmarks, facial features, facial points, and so on. The classifiers can be used to determine facial landmarks”). However, Senechal and Cheng fail to teach outputting the size of the face box of the user.

Zeng teaches outputting the size of the face box of the user (Page 141, Subsection 3.2: “We regress relative offsets of bounding boxes instead of absolute coordinates. Offsets are denoted by [Δl, Δt, Δr, Δb]” [offset equations reproduced as an image in the original action]; the Examiner interprets the equations’ w_g and h_g as referring to the size of the face box). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Senechal in view of Cheng to include outputting the size of the face box of the user, as taught by Zeng’s reference. The motivation for doing so would have been to output a feature map representing the probability of containing a face in a detection window on the input image, as suggested by Zeng (see Zeng, Page 139, subsection 3.1, paragraph 2).
Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Cheng and Zeng with Senechal to obtain the invention specified in claim 6.

Regarding claim 7, which incorporates claim 6, Senechal and Cheng fail to teach wherein the neural network module comprises an output feature tensor generation module and a plurality of prediction modules, and the output feature tensor generation module is configured to generate a plurality of output feature tensors having different sizes based on the image; each of the prediction modules is configured to receive a corresponding one of the output feature tensors so as to correspondingly generate an information tensor which corresponds to the corresponding one of the output feature tensors; the information tensor is configured to indicate a location information of the face box, a confidence score information, and a category information as well as a location information of the first key point coordinate and a location information of the second key point coordinate; and the feature acquisition module outputs the first key point coordinate, the second key point coordinate, and the size of the face box of the user based on all of the information tensors generated by the prediction modules.

Zeng teaches wherein the neural network module comprises an output feature tensor generation module and a plurality of prediction modules (Page 139, Subsection 3.1, Paragraph 1: “PPN is a fully-convolutional network (FCN) with 11 branches, consisting of convolutional layers, PReLU [7] activation layers and Softmax normalization layers…Each pixel on this output feature map represents the probability of containing a face within an 8 × 8 detection window on the input image.”), and the output feature tensor generation module is configured to generate a plurality of output feature tensors having different sizes based on the image (Figure 1 caption: “Fig. 1. The network structure of PPN. It takes a single image as input and generates multi-scale face proposals simultaneously via multiple branches in a pyramid manner.”); each of the prediction modules is configured to receive a corresponding one of the output feature tensors so as to correspondingly generate an information tensor which corresponds to the corresponding one of the output feature tensors (Page 139, Section 3, paragraph 1: “The overall pipeline is shown in Fig. 2. The first stage is the Proposal Pyramid Network (PPN) to generate multi-scale face proposals. The second stage named RNet-24 and the third stage named RNet-48 are both dual-task networks, which are used to refine proposals from PPN and predict offsets of corresponding bounding boxes.”); the information tensor is configured to indicate a location information of the face box (Page 139, subsection 3.1: “Each pixel on this output feature map represents the probability of containing a face within an 8 ×8 detection window on the input image. Actually, the process described above is equivalent to sliding a 8 ×8 window on the input image with a stride of 2.” [Zeng’s Figure 1 is reproduced as an image in the original action]), a confidence score information (Page 137, paragraph 2: “Taking a single image with arbitrary size as input, each branch will generate a probability map, in which each element represents the probability that whether a specified size window on the input image contains a face.”), and a category information as well as a location information of the first key point coordinate (the top-left coordinates on Page 141, Subsection 3.2, equate to the first key point) and a location information of the second key point coordinate (the bottom-right coordinates on Page 141, Subsection 3.2, equate to the second key point) (Page 141, Subsection 3.2: “We regress relative offsets of bounding boxes instead of absolute coordinates. Offsets are denoted by [Δl, Δt, Δr, Δb]” [offset equations reproduced as an image in the original action] “where [(x_l^p, y_t^p), (x_r^p, y_b^p)] denote the top left coordinates (i.e. first key point) and bottom right coordinates (i.e. second key point) of the proposal box respectively, [(x_l^g, y_t^g), (x_r^g, y_b^g)] denote the top left coordinates and bottom right coordinates of the ground truth box respectively.”); and the feature acquisition module outputs the first key point coordinate, the second key point coordinate (Page 141, Subsection 3.2, quoted above), and the size of the face box of the user based on all of the information tensors generated by the prediction modules ([offset equations reproduced as an image in the original action]; the Examiner interprets the equations’ w_g and h_g as referring to the size of the face box).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Senechal in view of Cheng to include wherein the neural network module comprises an output feature tensor generation module and a plurality of prediction modules, together with the remaining limitations of claim 7 set forth above, as taught by Zeng’s reference.
The motivation for doing so would have been to use a network that generates face candidates extremely fast and reduces the major computational complexity, as suggested by Zeng (see Zeng, Abstract). Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Cheng and Zeng with Senechal to obtain the invention specified in claim 7.

Regarding claim 14 (drawn to a method), claim 14 is rejected the same as claim 6; the arguments presented above for claim 6 are equally applicable to claim 14, and all the other limitations similar to claim 6 are not repeated herein, but incorporated by reference.

Regarding claim 15 (drawn to a method), claim 15 is rejected the same as claim 7; the arguments presented above for claim 7 are equally applicable to claim 15, and all the other limitations similar to claim 7 are not repeated herein, but incorporated by reference.

Allowable Subject Matter

Claims 4, 5, 12, and 13 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims. Claims 4 and 5 contain subject matter that is not disclosed or made obvious in the cited art.

In regard to claim 4, when considering claim 4 as a whole, the prior art of record fails to disclose or render obvious, alone or in combination: “wherein the step (a) comprises: calculating an absolute value of a difference between the ordinate of the second key point coordinate and the ordinate of the first key point coordinate; and setting the judgment value as a ratio of the absolute value of the difference over the size of the face box.”

In regard to claim 5, when considering claim 5 as a whole, the prior art of record fails to disclose or render obvious, alone or in combination: “wherein the rotation condition is that the judgment value is greater than or equal to a default value.”

In regard to claim 12, when considering claim 12 as a whole, the prior art of record fails to disclose or render obvious, alone or in combination: “wherein the step (b1) comprises: calculating an absolute value of a difference between the ordinate of the second key point coordinate and the ordinate of the first key point coordinate; and setting the judgment value as a ratio of the absolute value of the difference over the size of the face box.”

In regard to claim 13, when considering claim 13 as a whole, the prior art of record fails to disclose or render obvious, alone or in combination: “wherein the rotation condition is that the judgment value is greater than or equal to a default value.”

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Eslami (US 2023/0177871 A1) discloses a device used for face detection based on the coordinates of key points; the coordinates are then used to determine a bounding box around the region containing a face. Chandra et al. (US 11,887,252 B1) discloses a system that generates an accurate model of a body and updates the model based on an image of the user’s face. Kaminsky et al. (“Calculation of the exact value of the fractal dimension in the time series for the box-counting method.” 2019 9th International Conference on Advanced Computer Information Technologies (ACIT). IEEE, 2019.) discloses determining the fractal dimension of a time series using a box-counting method, wherein the box size is scaled to the signal values to achieve a more accurate estimation.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to UROOJ FATIMA, whose telephone number is (571) 272-2096. The examiner can normally be reached M-F 8:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw, can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/UROOJ FATIMA/
Examiner, Art Unit 2676

/Henok Shiferaw/
Supervisory Patent Examiner, Art Unit 2676
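
For readers tracing the claim 7/15 architecture that the action treats as a practical application, a rough structural sketch of the recited data flow follows. Every name, shape, and placeholder value is an illustrative assumption, not code from the application or from Zeng's PPN:

    # Structural sketch of claim 7: a generation module emits feature tensors
    # at several sizes; one prediction module per tensor emits an "information
    # tensor" carrying face-box location, confidence, category, and the two
    # key point locations. All values below are placeholders.
    from dataclasses import dataclass

    @dataclass
    class InformationTensor:
        box: tuple          # location information of the face box
        confidence: float   # confidence score information
        category: int       # category information
        keypoint_1: tuple   # location of the first key point coordinate
        keypoint_2: tuple   # location of the second key point coordinate

    def generate_output_feature_tensors(image):
        # Placeholder: a real system would run a backbone network here and
        # return feature maps at different spatial resolutions.
        return [("scale_8", image), ("scale_16", image), ("scale_32", image)]

    def prediction_module(feature_tensor) -> InformationTensor:
        # Placeholder head, one per scale, consuming its own feature tensor.
        return InformationTensor(box=(0, 0, 10, 10), confidence=0.9,
                                 category=0, keypoint_1=(2, 3), keypoint_2=(8, 3))

    def feature_acquisition(image):
        tensors = generate_output_feature_tensors(image)
        infos = [prediction_module(t) for t in tensors]
        # Claim 7 requires the output to be based on *all* information tensors;
        # keeping the most confident detection is one assumed way to do that.
        best = max(infos, key=lambda i: i.confidence)
        size = best.box[3] - best.box[1]
        return best.keypoint_1, best.keypoint_2, size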

Prosecution Timeline

May 21, 2024
Application Filed
Apr 02, 2026
Non-Final Rejection — §101, §103, §112 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 100%
Grant Probability With Interview: 99% (+100.0%)
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.
