Prosecution Insights
Last updated: April 19, 2026
Application No. 18/456,929

DETECTING METHOD, DETECTING DEVICE, AND RECORDING MEDIUM

Final Rejection — §101, §103, §112
Filed
Aug 28, 2023
Examiner
ELLIOTT, JORDAN MCKENZIE
Art Unit
2666
Tech Center
2600 — Communications
Assignee
Casio Computer Co. Ltd.
OA Round
2 (Final)
45%
Grant Probability
Moderate
3-4
OA Rounds
2y 10m
To Grant
31%
With Interview

Examiner Intelligence

Grants 45% of resolved cases
45%
Career Allow Rate
9 granted / 20 resolved
-17.0% vs TC avg
-13.7%
Interview Lift
minimal lift; resolved cases with vs. without interview
Typical timeline
2y 10m
Avg Prosecution
40 currently pending
Career history
60
Total Applications
across all art units

Statute-Specific Performance

§101
8.9%
-31.1% vs TC avg
§103
53.3%
+13.3% vs TC avg
§102
27.1%
-12.9% vs TC avg
§112
10.7%
-29.3% vs TC avg
Black line = Tech Center average estimate • Based on career data from 20 resolved cases
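
The deltas shown above are all consistent with a Tech Center baseline of about 40% per statute (e.g., 8.9% + 31.1 = 40.0 for §101). For readers who want to reproduce this kind of chart, here is a minimal sketch of how per-statute overcome rates could be tallied from resolved-case records; the data layout and the sample rows are hypothetical, not pulled from PTO records.

```python
from collections import defaultdict

# Hypothetical records: (statute cited in a rejection, was it overcome?).
# Real inputs would come from file-wrapper data; these rows are illustrative.
resolved = [("101", False), ("101", False), ("103", True),
            ("103", True), ("103", False), ("102", True)]

def overcome_rates(records):
    """Per-statute share of rejections the applicant overcame."""
    tally = defaultdict(lambda: [0, 0])  # statute -> [overcome, total]
    for statute, overcome in records:
        tally[statute][0] += overcome
        tally[statute][1] += 1
    return {s: won / total for s, (won, total) in tally.items()}

TC_AVG = 0.40  # the chart's deltas all imply a ~40% Tech Center baseline
for statute, rate in sorted(overcome_rates(resolved).items()):
    print(f"§{statute}: {rate:.1%} ({rate - TC_AVG:+.1%} vs TC avg)")
```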

Office Action

§101 §103 §112
DETAILED ACTION

Claims 1-14 are pending in this application; claims 1, 4-5, and 13-14 have been amended. Claims 1-14 have been examined under the priority date of 08/29/2022.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 08/28/2023 and 10/10/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Response to Arguments

35 U.S.C. 112(b): Applicant's arguments (see Remarks, filed 11/12/2025) have been fully considered and are persuasive. In view of the amendments made to claims 1, 13, and 14, the rejections made under 35 U.S.C. 112(b) have been withdrawn.

35 U.S.C. 101: Applicant's arguments (see Remarks, filed 11/12/2025) regarding the rejections made under 35 U.S.C. 101 have been fully considered and are not persuasive. Taking claim 1 as an example, claim 1 recites the following limitations, each of which could reasonably be classified as either a mental process or a step of mere data gathering: "A detecting method executed by at least one processor, comprising: acquiring depth information related to depth of a reference surface of a component and a target object" (mental process which could reasonably be completed by a person looking at an image and collecting a measurement); "deriving a distance from the reference surface to a representative point of the target object in a direction within +/- 10 degrees of a normal to the reference surface based on the depth information" (mental process which could be performed by a human visually assessing a spatial relationship and measuring a distance); "detecting the target object as a detection target, upon the distance satisfying a predetermined distance condition" (mental process of looking at an image, assessing the distance of an object in the image, and verifying that it is sufficient); "and determining a gesture of the detection target, if the detection target has been detected from a plurality of candidates, without determining a gesture of any remainder of the plurality of candidates" (mental process which could be performed by a human where a gesture is assessed visually and determined).

Under Step 2A, Prong One, the limitations recited are drawn to abstract ideas (mental processes or steps of mere data gathering) as noted above; the examiner has added further elaboration on why each step is drawn to a mental process or a step of mere data gathering for clarity of the record. Further, under Step 2A, Prong Two, the claims recite the additional elements of "at least one processor" (claims 1, 13, and 14) and "non-transitory computer recording medium" (claim 14), which neither constitute a judicial exception nor integrate the claim into a practical application. Under Step 2B, the claim does not include elements that amount to significantly more than an abstract idea (see MPEP § 2106). Therefore, for at least the reasons stated above, the examiner maintains the rejections made under 35 U.S.C. 101. The examiner respectfully encourages the applicant to amend the claims to further clarify the components or analysis which perform the actions above, to further translate the claims into a practical application.
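
To make the disputed claim-1 flow concrete, here is a minimal sketch of the recited steps under the examiner's reading (distance measured along the surface normal). Every name, threshold, and coordinate below is the editor's illustration, not the applicant's disclosed implementation or any party's code.

```python
import numpy as np

def signed_distance(rep_point, plane_point, unit_normal):
    # Distance from the reference surface to a representative point,
    # measured along the surface normal; the claim's +/- 10 degree
    # window collapses to exactly the normal in this sketch.
    return float(np.dot(rep_point - plane_point, unit_normal))

def detect_target(candidates, plane_point, unit_normal, min_distance):
    # Claim-1 flow: detect a candidate whose normal-direction distance
    # satisfies the predetermined condition, keeping the farthest one
    # (cf. claim 5); only that candidate would be gesture-classified.
    best = None
    for name, rep_point in candidates:
        d = signed_distance(rep_point, plane_point, unit_normal)
        if d >= min_distance and (best is None or d > best[1]):
            best = (name, d)
    return best  # None means nothing detected, so no gesture step runs

# Illustrative use: two hand candidates above a tabletop (the z = 0 plane).
plane_point = np.array([0.0, 0.0, 0.0])
unit_normal = np.array([0.0, 0.0, 1.0])
candidates = [("hand_A", np.array([0.1, 0.2, 0.30])),
              ("hand_B", np.array([0.4, 0.1, 0.05]))]
print(detect_target(candidates, plane_point, unit_normal, min_distance=0.10))
# -> ('hand_A', 0.3): only hand_A would proceed to gesture determination
```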
35 U.S.C. 102: Applicant's arguments (see Remarks, filed 11/12/2025) regarding the rejections made under 35 U.S.C. 102(a)(1) have been fully considered and are persuasive in view of the amendments made. However, due to the change of scope of claims 1, 13, and 14, a new ground of rejection is made over Suzuki in further view of Oshima, as fully discussed below.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-14 are rejected under 35 U.S.C. 101 as being directed to an abstract idea without significantly more.

Claims 1, 13, and 14 are directed to abstract ideas: mental processes or steps of mere data gathering. Specifically, the claims recite the limitations: acquiring depth information related to depth of a reference surface of a component and a target object (mental process which could reasonably be completed by looking at an image); deriving a distance from the reference surface to a representative point of the target object in a direction within +/- 10 degrees of a normal to the reference surface based on the depth information (mental process which could be performed by a human); detecting the target object as a detection target, upon the distance satisfying a predetermined distance condition (mental process of looking at an image and assessing the distance of an object in the image); and determining a gesture of the detection target, if the detection target has been detected from a plurality of candidates, without determining a gesture of any remainder of the plurality of candidates (mental process which could be performed by a human where a gesture is assessed visually).

Under Step 2A, Prong One, the limitations recited are drawn to abstract ideas (mental processes or steps of mere data gathering) as noted above. Further, under Step 2A, Prong Two, the claims recite the additional elements of "at least one processor" (claims 1, 13, and 14) and "non-transitory computer recording medium" (claim 14), which neither constitute a judicial exception nor integrate the claims into a practical application. Under Step 2B, the claims do not include elements that amount to significantly more than an abstract idea. See MPEP § 2106.

Dependent claims 2-12 do not add limitations that meaningfully translate the abstract idea into a practical application or add significantly more.

Regarding claim 2, the claim adds the limitations: acquiring a captured image of the reference surface and the target object (data gathering), wherein the deriving includes deriving the distance based on the captured image and the depth information (mental process of deriving which a human could perform in their mind). The limitations recited by claim 2 constitute a mental process or steps of mere data gathering without significantly more.
Regarding claim 3, the claim adds the limitations: acquiring a captured image in which the reference surface and the target object are captured (data gathering); and extracting the target object in the captured image, at least a part of the target object overlapping the reference surface (mental process of determining that an object is in an image), wherein the deriving includes deriving the distance based on the depth information, the distance being from the reference surface to the representative point of the target object that is extracted in the extracting (mental process of deriving a distance mathematically). The limitations recited by claim 3 constitute a mental process or steps of mere data gathering without significantly more.

Regarding claim 4, the claim adds the limitations: determining that the distance satisfies the predetermined distance condition upon the distance that is derived in the deriving being greater than or equal to a standard distance (mental process of making a determination). The limitations recited by claim 4 constitute a mental process without significantly more.

Regarding claim 5, the claim adds the limitations: wherein, upon the target object including two or more target objects, the deriving includes deriving distances from the reference surface to respective representative points of the two or more target objects, and wherein the detecting includes detecting the target object as the detection target, the distance from the reference surface to the representative point of the target object being a longest of the distances that satisfy the predetermined distance condition (mental process of making a determination). The limitations recited by claim 5 constitute a mental process without significantly more.

Regarding claim 6, the claim adds the limitations: wherein the representative point is a centroid of a portion of a candidate of the detection target that overlaps the reference surface in the captured image (mental process of making a determination). The limitations recited by claim 6 constitute a mental process without significantly more.

Regarding claim 7, the claim adds the limitations: wherein the detection target is a hand of a person, and wherein the component is held by the hand (mental process of making a determination of image contents). The limitations recited by claim 7 constitute a mental process without significantly more.

Regarding claim 8, the claim adds the limitations: wherein the detection target is a hand of a person, and wherein the captured image is an image of an imaging region including the hand of the person who is located on a side of the reference surface of the component (mental process of making a determination of image contents). The limitations recited by claim 8 constitute a mental process without significantly more.

Regarding claim 9, the claim adds the limitations: wherein the reference surface of the component is an image display surface on which an image is displayed (mental process of making a determination of image contents). The limitations recited by claim 9 constitute a mental process without significantly more.

Regarding claim 10, the claim adds the limitations: extracting a planar rectangular region with a constant depth or a continuously changing depth as the reference surface based on the depth information (data gathering). The limitations recited by claim 10 constitute a mental process or steps of mere data gathering without significantly more.
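
Of the dependent claims above, claims 5 and 6 are the most algorithmic: select the farthest qualifying candidate, using as its representative point the centroid of the candidate's overlap with the reference surface. Here is a minimal sketch of the claim-6 centroid on made-up binary masks; the mask shapes and values are purely illustrative, not from the specification.

```python
import numpy as np

def overlap_centroid(candidate_mask, surface_mask):
    # Claim-6 style representative point: centroid of the part of a
    # candidate's image mask that overlaps the reference surface.
    overlap = candidate_mask & surface_mask
    ys, xs = np.nonzero(overlap)
    if len(xs) == 0:
        return None  # no overlap, so no representative point
    return (float(xs.mean()), float(ys.mean()))

# Illustrative masks: a 4x4 candidate patch half over the surface region.
cand = np.zeros((8, 8), bool)
cand[2:6, 2:6] = True
surf = np.zeros((8, 8), bool)
surf[:, 4:] = True
print(overlap_centroid(cand, surf))  # -> (4.5, 3.5)
```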
Regarding claim 11, the claim adds the limitations: acquiring a captured image of the reference surface of the component and the target object (data gathering); and identifying the reference surface based on a position of a sign in the captured image, the sign being at a predetermined position of the component (mental process of identifying image contents or the position of an object in an image). The limitations recited by claim 11 constitute a mental process or steps of mere data gathering without significantly more.

Regarding claim 12, the claim adds the limitations: acquiring a captured image of the reference surface and the target object (data gathering); and identifying the reference surface based on a position of a sign in the captured image, the sign being at a predetermined position in an image displayed on the reference surface (mental process of identifying image contents or the position of an object in an image). The limitations recited by claim 12 constitute a mental process or steps of mere data gathering without significantly more.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): "(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention."

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: "The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention."

Claim 1 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites the limitation "approximate normal," which renders the claim indefinite because "approximate" does not definitively establish whether the claimed distance is normal to the surface or not. For the purposes of examination, the examiner is interpreting this limitation as the distance being normal to the surface; applicant is encouraged to amend to further clarify.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-14 are rejected under 35 U.S.C. 103 as being unpatentable over Suzuki (US 2014/0292723 A1) in view of Oshima (US 2016/0054859 A1).

Regarding claim 1, Suzuki teaches: A detecting method executed by at least one processor, comprising (Suzuki, [0005]: the device includes a processor): acquiring depth information related to depth of a reference surface of a component and a target object (Suzuki, [0057]: a positional relation between the hand (target object) and the image projected by the projector (reference surface) is determined); [deriving a distance from the reference surface to a representative point of the target object in a direction within +/- 10 degrees of a normal to the reference surface based on the depth information;] detecting the target object as a detection target, upon the distance satisfying a predetermined distance condition (Suzuki, [0084]: the hand must be a certain distance from the projector surface to be detected; if this distance is not met, the system will generate an alert); and determining a gesture of the detection target (Suzuki, [0066]: the gesture is determined based on the shape of the hand and fingers and their change in shape between gestures), if the detection target has been detected from a plurality of candidates, without determining a gesture of any remainder of the plurality of candidates (Suzuki, [0094]: the gesture is only determined if it is one of the predetermined gestures stored; if not, the gesture is not determined).

Suzuki does not teach: deriving a distance from the reference surface to a representative point of the target object in a direction within +/- 10 degrees of a normal to the reference surface based on the depth information.

However, in the same field of endeavor, Oshima teaches: deriving a distance from the reference surface to a representative point of the target object in a direction within +/- 10 degrees of a normal to the reference surface based on the depth information (Oshima, [0098]: the fingertip touch positions (target object) relative to the flat surface (reference surface) are derived; and [0100]: vectors of the position are derived using orthogonal projections where the fingertip contact on the surface is an orthogonal vector, meaning the positional relationship/distance is derived as a distance perpendicular to the reference surface. Given that the distance is perpendicular to the flat surface, the distance relationship derived is within 10 degrees of the normal to the reference surface because it is normal to the reference surface).

The combination of Suzuki and Oshima would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Suzuki teaches a method of determining hand gestures and distances of the hand in relation to a flat surface; it does not teach that the distance to the target object or hand is measured orthogonally to the flat surface.
Oshima teaches this limitation, and this feature would be advantageous to add to the system of Suzuki because determining hand and finger contact points and distances in relation to a flat surface requires reference points, and defining the planes as being perpendicular to one another allows the system to more accurately assess the vertical distance between the hand and the flat surface (Oshima, [0032]-[0033] and [0054]).

Regarding claim 2, the combination of Suzuki and Oshima teaches: The detecting method according to claim 1, further comprising: acquiring a captured image of the reference surface and the target object (Suzuki, [0057]: an image is captured of the hand (target object) and the image projected by the projector (reference surface)), wherein the deriving includes deriving the distance based on the captured image and the depth information (Suzuki, [0057]: a positional relation between the hand (target object) and the image projected by the projector (reference surface) is determined, based on an image captured of the hand and the projector surface).

Regarding claim 3, the combination of Suzuki and Oshima teaches: The detecting method according to claim 1, further comprising: acquiring a captured image in which the reference surface and the target object are captured (Suzuki, [0057]: an image is captured of the hand (target object) and the image projected by the projector (reference surface)); and extracting the target object in the captured image, at least a part of the target object overlapping the reference surface (Suzuki, [0057]: an image is captured of the hand and the projected image; Figures 6A and 6B show the hand over the projector surface, where there is at least partial overlap, as [0057] describes), wherein the deriving includes deriving the distance based on the depth information (Suzuki, [0097]: the projection of the image is based on the position of the hand (target object) and the border of the target area onto which the projector is projecting; Figures 12 and 13A-C show the position of the center of the hand (representative point) being used; [0065]: the center of the back of the hand may be used when determining the 3D coordinates of the hand), the distance being from the reference surface to the representative point of the target object that is extracted in the extracting (Suzuki, [0097], Figures 12 and 13A-C, and [0065]: given that the center of the back of the hand can be used for the hand coordinates, the center of the hand (representative point) could be used to determine the distance from the projector surface).
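
The combination rationale turns on a simple geometric point from Oshima's orthogonal projection: a distance measured by orthogonal projection lies exactly along the surface normal, so it trivially falls inside the claimed +/- 10 degree window. A small check of that point, with illustrative vectors chosen by the editor:

```python
import numpy as np

def angle_to_normal_deg(direction, unit_normal):
    # Angle in degrees between a measurement direction and the surface normal.
    d = direction / np.linalg.norm(direction)
    return float(np.degrees(np.arccos(np.clip(np.dot(d, unit_normal), -1.0, 1.0))))

normal = np.array([0.0, 0.0, 1.0])
orthogonal_dir = np.array([0.0, 0.0, 1.0])  # direction of an orthogonal projection
print(angle_to_normal_deg(orthogonal_dir, normal))  # 0.0: trivially within 10 degrees
```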
Regarding claim 4, the combination of Suzuki and Oshima teaches: The detecting method according to claim 1, further comprising: determining that the distance satisfies the predetermined distance condition upon the distance that is derived in the deriving being greater than or equal to a standard distance (Suzuki, [0083]: the display unit displays vertical and lateral alert lines to verify that the distance conditions for the hand are met; [0084]: the hand must be a certain distance or more from the projector surface to be detected, and if this distance is not met the system will generate an alert using alert lines; further, [0069] details that there is a minimum (predetermined) distance at which the hand may be captured).

Regarding claim 5, the combination of Suzuki and Oshima teaches: The detecting method according to claim 1, wherein, upon the target object including two or more target objects, the deriving includes deriving distances from the reference surface to respective representative points of the two or more target objects (Suzuki, [0065]: the measuring unit measures the 3D coordinates of the manipulating object or objects, such as the hand and fingers (detection of two or more target objects), and coordinates may be detected for all fingertips (multiple representative points); [0057]: a positional relation between the hand (target object) and the image projected by the projector (reference surface) is determined, based on an image captured of the hand and the projector surface), and wherein the detecting includes detecting the target object as the detection target, the distance from the reference surface to the representative point of the target object being a longest of the distances that satisfy the predetermined distance condition (Suzuki, [0065], [0083], and [0084]: given that the distance can be measured for the hand or for multiple fingertips, the distance condition can be assessed for both the hand and the fingertips; further, [0069] details that there is a minimum (predetermined) distance at which the hand may be captured).
Regarding claim 6, the combination of Suzuki and Oshima teaches: The detecting method according to claim 2, wherein the representative point is a centroid of a portion of a candidate of the detection target that overlaps the reference surface in the captured image (Suzuki, [0097]: the projection of the image is based on the position of the hand (target object) and the border of the target area; Figures 12 and 13A-C show the position of the center of the hand (representative point) being used; [0065]: the center of the back of the hand may be used when determining the 3D coordinates of the hand, so the center of the hand (representative point) could be used to determine the distance from the projector surface, where Figure 13A shows the center being determined for the hand when the whole hand overlaps the surface in the image).

Regarding claim 7, the combination of Suzuki and Oshima teaches: The detecting method according to claim 1, wherein the detection target is a hand of a person, and wherein the component is held by the hand (Suzuki, [0039]: the system detects the position of a manipulating object (hand or fingers) and a manipulated object (such as a document being projected); [0040]: the system determines the document has been touched; Figure 1 shows the user manipulating the document, which the examiner is interpreting as the document being touched, moved, or otherwise handled by the user) (Suzuki, Figure 1).

Regarding claim 8, the combination of Suzuki and Oshima teaches: The detecting method according to claim 2, wherein the detection target is a hand of a person, and wherein the captured image is an image of an imaging region including the hand of the person who is located on a side of the reference surface of the component (Suzuki, [0057]: an image is captured of the hand (target object/detection target) and the image projected by the projector (reference surface); Figure 1 shows the positional relationship of the hand and the surface, where the hand is located on a side of the surface).

Regarding claim 9, the combination of Suzuki and Oshima teaches: The detecting method according to claim 1, wherein the reference surface of the component is an image display surface on which an image is displayed (Suzuki, Figure 1 shows the surface as a projection surface; [0005]: the projector displays an image on a surface).

Regarding claim 10, the combination of Suzuki and Oshima teaches: The detecting method according to claim 1, further comprising: extracting a planar rectangular region with a constant depth or a continuously changing depth as the reference surface based on the depth information (Suzuki, [0039]: the projection surface is a predetermined surface; [0043]: this can be a table surface, which is a flat surface of constant depth).
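
Claim 10's "planar rectangular region with a constant depth" maps here to Suzuki's flat table surface. For readers curious how such a reference surface might be pulled from depth data in practice, here is a toy RANSAC plane fit; this routine is entirely the editor's illustration and is not disclosed by either reference.

```python
import numpy as np

def extract_reference_plane(points, iters=200, tol=0.01, seed=0):
    # Toy RANSAC: fit the dominant plane in an (N, 3) cloud of
    # depth-derived points, one plausible way to extract a flat
    # reference surface of roughly constant depth.
    rng = np.random.default_rng(seed)
    best_count, best_plane = 0, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # skip degenerate (collinear) samples
        n = n / norm
        inliers = int((np.abs((points - p0) @ n) < tol).sum())
        if inliers > best_count:
            best_count, best_plane = inliers, (p0, n)
    return best_plane  # (point on plane, unit normal) or None

# Illustrative cloud: 300 tabletop points near z = 0 with a little noise.
pts = np.column_stack([np.random.rand(300), np.random.rand(300),
                       0.001 * np.random.randn(300)])
point, normal = extract_reference_plane(pts)
print(np.round(np.abs(normal), 2))  # ~ [0. 0. 1.]: the z = 0 plane
```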
Regarding claim 11, the combination of Suzuki and Oshima teaches: The detecting method according to claim 1, further comprising: acquiring a captured image of the reference surface of the component and the target object (Suzuki, [0057]: an image is captured of the hand (target object) and the projection surface (reference surface)); and identifying the reference surface based on a position of a sign in the captured image, the sign being at a predetermined position of the component (Suzuki, [0071]: the projection surface is detected/displayed using a 3D border around the area for which the coordinates and positions are known, analogous to the applicant's definition of a sign as something placed at all four corners to detect the surface, as defined in [0036] of applicant's specification).

Regarding claim 12, the combination of Suzuki and Oshima teaches: The detecting method according to claim 1, further comprising: acquiring a captured image of the reference surface and the target object (Suzuki, [0057]: an image is captured of the hand (target object) and the projection surface (reference surface)); and identifying the reference surface based on a position of a sign in the captured image, the sign being at a predetermined position in an image displayed on the reference surface (Suzuki, [0071]: the projection surface is detected/displayed using a 3D border around the area for which the coordinates and positions are known, analogous to the applicant's definition of a sign, as defined in [0036] of applicant's specification).

Regarding claim 13, the combination of Suzuki and Oshima teaches: A detecting device comprising at least one processor configured to (Suzuki, [0005]: the device includes a processor): acquire depth information related to depth of a reference surface of a component and a target object (Suzuki, [0057]: a positional relation between the hand (target object) and the image projected by the projector (reference surface) is determined); derive a distance from the reference surface to a representative point of the target object in a direction within +/- 10 degrees of a normal to the reference surface based on the depth information (Oshima, [0098] and [0100], as discussed for claim 1: the distance is derived by orthogonal projection and is therefore normal to, and within 10 degrees of the normal to, the reference surface); detect the target object as a detection target, upon the distance satisfying a predetermined distance condition (Suzuki, [0084]: the hand must be a certain distance from the projector surface to be detected; if this distance is not met, the system will generate an alert); and determine a gesture of the detection target (Suzuki, [0066]: the gesture is determined based on the shape of the hand and fingers and their change in shape between gestures), if the detection target has been detected from a plurality of candidates, without determining a gesture of any remainder of the plurality of candidates (Suzuki, [0094]: the gesture is only determined if it is one of the predetermined gestures stored; if not, the gesture is not determined).
The combination of Suzuki and Oshima would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention, for the same reasons given for claim 1 above (Oshima, [0032]-[0033] and [0054]).

Regarding claim 14, Suzuki teaches: A non-transitory computer-readable recording medium storing a program that causes at least one processor to (Suzuki, [0005]: the device includes a processor): acquire depth information related to depth of a reference surface of a component and a target object (Suzuki, [0057]: a positional relation between the hand (target object) and the image projected by the projector (reference surface) is determined); derive a distance from the reference surface to a representative point of the target object in a direction within +/- 10 degrees of a normal to the reference surface based on the depth information (Oshima, [0098] and [0100], as discussed for claim 1: the distance is derived by orthogonal projection and is therefore normal to, and within 10 degrees of the normal to, the reference surface); detect the target object as a detection target, upon the distance satisfying a predetermined distance condition (Suzuki, [0084]: the hand must be a certain distance from the projector surface to be detected; if this distance is not met, the system will generate an alert); and determine a gesture of the detection target (Suzuki, [0066]: the gesture is determined based on the shape of the hand and fingers and their change in shape between gestures), if the detection target has been detected from a plurality of candidates, without determining a gesture of any remainder of the plurality of candidates (Suzuki, [0094]: the gesture is only determined if it is one of the predetermined gestures stored; if not, the gesture is not determined). The combination of Suzuki and Oshima would have been obvious for the same reasons given for claim 1 above
(Oshima, [0032]-[0033] and [0054]).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. For a listing of analogous prior art as cited by the examiner, please see the attached PTO-892 Notice of References Cited.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORDAN M ELLIOTT, whose telephone number is (703) 756-5463. The examiner can normally be reached M-F, 8AM-5PM ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.M.E./ Examiner, Art Unit 2666
/EMILY C TERRELL/ Supervisory Patent Examiner, Art Unit 2666

Prosecution Timeline

Aug 28, 2023
Application Filed
Aug 07, 2025
Non-Final Rejection — §101, §103, §112
Nov 12, 2025
Response Filed
Jan 30, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573117
METHOD AND DEVICE FOR DEEP LEARNING-BASED PATCHWISE RECONSTRUCTION FROM CLINICAL CT SCAN DATA
2y 5m to grant • Granted Mar 10, 2026
Patent 12475998
SYSTEMS AND METHODS OF ADAPTIVELY GENERATING FACIAL DEVICE SELECTIONS BASED ON VISUALLY DETERMINED ANATOMICAL DIMENSION DATA
2y 5m to grant • Granted Nov 18, 2025
Patent 12450918
AUTOMATIC LANE MARKING EXTRACTION AND CLASSIFICATION FROM LIDAR SCANS
2y 5m to grant • Granted Oct 21, 2025
Patent 12437415
METHODS AND SYSTEMS FOR NON-DESTRUCTIVE EVALUATION OF STATOR INSULATION CONDITION
2y 5m to grant • Granted Oct 07, 2025
Patent 12406358
METHODS AND SYSTEMS FOR AUTOMATED SATURATION BAND PLACEMENT
2y 5m to grant • Granted Sep 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
45%
Grant Probability
31%
With Interview (-13.7%)
2y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 20 resolved cases by this examiner. Grant probability derived from career allow rate.
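
The headline figures above follow from simple arithmetic on the examiner's record, taking the -13.7 point interview lift from the Examiner Intelligence panel. A quick check:

```python
granted, resolved = 9, 20               # career record shown above
grant_probability = granted / resolved  # 0.45 -> 45%
interview_lift = -0.137                 # -13.7 percentage points
print(f"{grant_probability:.0%} baseline, "
      f"{grant_probability + interview_lift:.0%} with interview")
# -> "45% baseline, 31% with interview"
```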
