Prosecution Insights
Last updated: April 19, 2026
Application No. 18/657,702

OBJECT DETECTION METHOD, ELECTRONIC APPARATUS AND GESTURE DETECTION SYSTEM

Non-Final OA — §103, §112
Filed
May 07, 2024
Examiner
CHANG, DANIEL CHEOLJIN
Art Unit
2669
Tech Center
2600 — Communications
Assignee
Aspeed Technology Inc.
OA Round
1 (Non-Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 89% — above average (117 granted / 132 resolved; +26.6% vs TC avg)
Interview Lift: +11.7% (moderate, ~+12%) for resolved cases with an interview
Avg Prosecution: 2y 6m typical timeline; 25 applications currently pending
Total Applications: 157 across all art units (career history)
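The percentages above follow from simple ratios on the examiner's docket. A minimal Python sketch of the apparent arithmetic (the rounding and the cap on the interview-adjusted figure are assumptions, not something the report documents):

```python
# Career allow rate: granted / resolved, displayed as a rounded percentage.
granted, resolved = 117, 132
allow_rate = 100 * granted / resolved            # 88.63...
print(f"Career allow rate: {allow_rate:.0f}%")   # prints "Career allow rate: 89%"

# The report states a +11.7 percentage-point lift for resolved cases with
# an interview. Added to the base rate this exceeds 100%, so the displayed
# 99% presumably reflects a cap (an assumption, not documented).
interview_lift = 11.7
with_interview = min(allow_rate + interview_lift, 99.0)
print(f"With interview: {with_interview:.0f}%")  # prints "With interview: 99%"
```

Note that 88.6% + 11.7 points would nominally exceed 100%, so the 99% "With Interview" figure cannot be a plain sum; treat it as the tool's ceiling rather than a literal probability.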

Statute-Specific Performance

§101: 8.1% (-31.9% vs TC avg)
§103: 53.4% (+13.4% vs TC avg)
§102: 14.1% (-25.9% vs TC avg)
§112: 20.7% (-19.3% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 132 resolved cases
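Assuming the deltas are plain percentage-point differences, the Tech Center averages they imply can be recovered by subtraction. A short sanity-check sketch (the figures are copied from the panel above; the baseline interpretation is an assumption):

```python
# Examiner's per-statute rate and its stated delta vs the Tech Center
# average, copied from the panel above (percentage points).
stats = {
    "§101": (8.1, -31.9),
    "§103": (53.4, +13.4),
    "§102": (14.1, -25.9),
    "§112": (20.7, -19.3),
}
for statute, (rate, delta) in stats.items():
    implied_tc_avg = round(rate - delta, 1)
    print(f"{statute}: examiner {rate}%, implied TC average {implied_tc_avg}%")
# Every statute implies the same 40.0% TC average, which hints that the
# tool may compare against a single baseline rather than per-statute data.
```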

Office Action

§103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicants

This communication is in response to the Application filed on 05/07/2024. Claims 1-19 are pending.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: an object detection module to detect an original image in claims 1, 10 and 19. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 7 and 16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 7 recites the limitations of “a width range in a horizontal direction” (line 4) and “the width range of the top-of-head position” (lines 8-9). It is unclear whether “the width range” refers to the previously obtained width range or to a different width range associated with the top-of-head position. The antecedent basis and scope of “the width range” are unclear. Also, the claim recites “setting the valid determination range based on an upper area” (line 8). However, the term “upper area” is unclear because the claim does not define what constitutes the “upper” portion or the boundaries of the recited area. It is unclear whether “upper area” refers to a region relative to an image, a body, or a coordinate system, and the claim does not specify how the limits of this area are determined. With respect to claim 16, arguments analogous to those presented for claim 7 are applicable.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 8, 9, 10, 17, 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Setiawan et al. (U.S. Publication No. 2012/0162409) (hereafter, "Setiawan") in view of WANG et al. (U.S. Publication No. 2023/0027040) (hereafter, "WANG").

Regarding claim 1, Setiawan teaches an object detection method, using a processor to implement following steps, comprising ([0033] The system control unit 200 includes a sub-unit which achieves functions of a position detection unit 201 and a gesture recognition unit 202): executing an object detection module to detect an original image ([0028] An input device 100 according to the present embodiment is a device capable of detecting the position of a user's body part from a moving image captured from the user), and obtaining a first position information, a second position information, and a third position information related to a same human body object from the original image through the object detection module, wherein the first position information corresponds to a head area, the second position information corresponds to a body area, and the third position information corresponds to a hand area ([0034] The position detection unit 201 detects positions of user's body parts such as the hand, face and trunk from a moving image obtained from the imaging unit 101); … obtaining a hand position in the original image based on the
third position information ([0039] The position of the user's hand 121 is represented, for example, by a coordinate formed by a lateral position 302 at a center point 320 of a shape region of the detected hand and a longitudinal position 303 at the center point 320). Setiawan does not expressly teach setting a valid determination range based on at least one of the first position information and the second position information … in response to the hand position being within the valid determination range, executing a gesture recognition module; and in response to the hand position not being within the valid determination range, not executing the gesture recognition module. However, WANG teaches setting a valid determination range based on at least one of the first position information and the second position information ([0070] detecting whether the first image includes a human body; segmenting the human body to obtain a plurality of segmented regions when it is detected that the first image includes the human body, and detecting whether the segmented regions include an arm region) … in response to the hand position being within the valid determination range, executing a gesture recognition module ([0070] detecting whether the segmented regions include an arm region; detecting whether the arm region includes a hand region when it is detected that the segmented regions include an arm region; performing gesture recognition on the hand region when it is detected that the arm region includes a hand region; returning a result that the first target gesture is recognized from the first image when a gesture in the hand region is recognized as the first target gesture); and in response to the hand position not being within the valid determination range, not executing the gesture recognition module ([0070] returning a result that the first target gesture is not recognized from the first image when it is detected that the first image does not include a human body, or that the 
segmented regions do not include an arm region, or that the arm region does not include a hand region, or that the gesture in the hand region is not the first target gesture).

It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of Setiawan to incorporate the step/system of setting multiple segmented regions to check if a specific part is present, upon detecting a human body in the initial image, performing gesture recognition when the segmented regions include a hand region and skipping gesture recognition when the segmented regions do not include a hand region taught by WANG. The suggestion/motivation for doing so would have been to improve the success rate of gesture detection ([0070] a success rate of gesture detection may be improved by performing human body detection, human body segmentation, arm region detection, and hand region detection on the first image in sequence, and performing gesture recognition in the hand region; [0110] by sequentially detecting the human body, the arm region, and the hand region in the first image, a situation that it is difficult to perform detection since the hand region occupies a small area in a picture may be avoided, thereby improving a success rate of gesture detection). Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predicted results. Therefore, it would have been obvious to combine Setiawan and WANG to obtain the invention as specified in claim 1.

Regarding claim 8, the combination of Setiawan and WANG teaches all the limitations of claim 1 above.
Setiawan teaches obtaining a gesture recognition result ([0083] If, when a gesture is performed, the position of the hand is in the first operation region (Step 1205: first operation region) and the determination result thereof is consistent with a displayed GUI (Step 1206: Yes) … if, when a gesture is performed, the position of the hand is in the second operation region (Step 1205: second operation region) and the determination result thereof is consistent with a displayed GUI (Step 1207: Yes)); and executing a corresponding operation based on the gesture recognition result ([0083] the position of the hand is in the first operation region (Step 1205: first operation region) and the determination result thereof is consistent with a displayed GUI (Step 1206: Yes), the input device 100 executes a first user operation (Step: 1208) ... the position of the hand is in the second operation region (Step 1205: second operation region) and the determination result thereof is consistent with a displayed GUI (Step 1207: Yes), the input device 100 executes a second user operation (Step 1209)). Setiawan does not expressly teach wherein in response to the hand position being within the valid determination range, comprising: executing the gesture recognition module and. However, WANG teaches wherein in response to the hand position being within the valid determination range, comprising: executing the gesture recognition module and ([0070] detecting whether the segmented regions include an arm region; detecting whether the arm region includes a hand region when it is detected that the segmented regions include an arm region; performing gesture recognition on the hand region when it is detected that the arm region includes a hand region; returning a result that the first target gesture is recognized from the first image when a gesture in the hand region is recognized as the first target gesture). 
It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of Setiawan to incorporate the step/system of performing gesture recognition when the segmented regions include a hand region taught by WANG. Motivation for this combination has been stated in claim 1.

Regarding claim 9, the combination of Setiawan and WANG teaches all the limitations of claim 8 above. WANG teaches wherein the operation comprises at least one of controlling an action of a physical apparatus and controlling an adjustment of a parameter setting of an electronic apparatus having the processor ([0138] after the gesture control function is turned on, the control method of the present embodiment may further include: opening a first-level functional interface on a display interface, and controlling the first-level functional interface according to a gesture of the control hand. After the gesture control function is turned on, the control hand has the right to control the display interface; [0139] controlling a display size adjustment operation of the second-level functional interface according to a gesture change of the control hand; controlling a page turning operation in the second-level functional interface according to a gesture of the control hand and a moving direction of the gesture; and controlling an operation of returning to the first-level functional interface from the second-level functional interface according to a gesture of the control hand and a duration of the gesture).

With respect to claim 10, arguments analogous to those presented for claim 1 are applicable. With respect to claim 17, arguments analogous to those presented for claim 8 are applicable. With respect to claim 18, arguments analogous to those presented for claim 9 are applicable. With respect to claim 19, arguments analogous to those presented for claim 1 are applicable.

Claims 2 and 11 are rejected under 35 U.S.C.
103 as being unpatentable over Setiawan et al. (U.S. Publication No. 2012/0162409) (hereafter, "Setiawan") in view of WANG et al. (U.S. Publication No. 2023/0027040) (hereafter, "WANG") and further in view of MURATA et al. (U.S. Publication No. 2013/0016910) (hereafter, "MURATA"). Regarding claim 2, the combination of Setiawan and WANG teaches all the limitations of claim 1 above. WANG teaches wherein setting the valid determination range based on at least one of the first position information and the second position information comprises ([0070] detecting whether the first image includes a human body; segmenting the human body to obtain a plurality of segmented regions when it is detected that the first image includes the human body, and detecting whether the segmented regions include an arm region). WANG does not expressly teach wherein in response to the hand position being within the valid determination range, comprising: executing the gesture recognition module and. However, MURATA teaches calculating a face width and a face area based on the first position information corresponding to the head area (FIG. 12; [0144] the position and the range of the target region is expressed by coordinates (x, y) with respect to the upper left corner, the height h, and the width w; [0143] In the case a target object is the face of a person, the region extraction unit 103 detects a target region (a face region in this case) by a method as shown in FIG. 12); setting a threshold based on the face width (FIG. 
12; [0144] in the case the shape of a target region is circular, the position and the range of the target region is expressed by centre coordinates (x, y) and the radius r; [0150] the width and height of the remaining target regions (the radius in the case the target regions are circular)); and setting the valid determination range within a circular range with a center point of the face area as a center and the threshold as a radius ([0143] In the case a target object is the face of a person, the region extraction unit 103 detects a target region (a face region in this case) by a method as shown in FIG. 12; [0144] in the case the shape of a target region is circular, the position and the range of the target region is expressed by centre coordinates (x, y) and the radius r; [0104] a face region of a person is shown by a hatched circle. In this case, the position of the face region is expressed by centre coordinates of the circle. Also, the range of the face region is expressed by the radius of the circle). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of WANG to incorporate the step/system of calculating a width of a face region and a face area based on the face position, obtaining the radius of the face region based on the width and determining a range of the face region is expressed by the radius taught by MURATA. The suggestion/motivation for doing so would have been to improve the efficiency for correction of a face region ([0178] Swift transition to a corresponding reproduction scene, at the time of discovery of an erroneous input, is thereby enabled, allowing more efficient correction of a face region; [0108] By applying these technologies, highly accurate video timeline metadata can be provided. Also, various applications using the video timeline metadata are realized). 
Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predicted results. Therefore, it would have been obvious to combine Setiawan and WANG with MURATA to obtain the invention as specified in claim 2. With respect to claim 11, arguments analogous to those presented for claim 2 are applicable.

Claims 4, 7, 13 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Setiawan et al. (U.S. Publication No. 2012/0162409) (hereafter, "Setiawan") in view of WANG et al. (U.S. Publication No. 2023/0027040) (hereafter, "WANG") and further in view of HEO et al. (U.S. Publication No. 2021/0191526) (hereafter, "HEO").

Regarding claim 4, the combination of Setiawan and WANG teaches all the limitations of claim 1 above. WANG teaches wherein setting the valid determination range based on at least one of the first position information and the second position information comprises ([0070] detecting whether the first image includes a human body; segmenting the human body to obtain a plurality of segmented regions when it is detected that the first image includes the human body, and detecting whether the segmented regions include an arm region). WANG does not expressly teach obtaining a body length range in a vertical direction based on the second position information corresponding to the body area; and setting the valid determination range according to a preset ratio in the body length range to determine whether the hand position in the vertical direction is within the valid determination range. However, HEO teaches obtaining a body length range in a vertical direction based on the second position information corresponding to the body area ([0071] The processor may check the skeleton of the user to recognize the movement of the joint; [0084] The electronic device 101 may check a user's location using at least one sensor ...
The R1 and R2 may be values previously stored in the electronic device 101, or may be determined based on the user's characteristics (e.g., the user's height, the length of the user's body structure (e.g., arm), etc.) recognized by the electronic device 101; [0104] The user's physical characteristics may be directly input by the user to the electronic device or may be determined by the processor based on recognized information. The processor may configure, for example, the location and size of the gesture area. For example, the processor may configure the gesture area as the whole body 911, as an upper body 921 or a lower body 923 only); and setting the valid determination range according to a preset ratio in the body length range ([0104] The processor may configure, for example, the location and size of the gesture area ... the location and size of the gesture recognition area may be configured differently; FIG. 10; [0106] The electronic device 1010 may determine a gesture area 1040 based on the location and/or body characteristics of the user 1020; [0104] the processor may recognize user's physical characteristics (e.g., height, length of a body structure)) to determine whether the hand position in the vertical direction is within the valid determination range (Fig. 12; [0115] the processor may determine the entire upper body 1221 of the user as an area in which a gesture is detected. The processor may determine the upper portion of the user's waist as the area in which the gesture is detected by using the user's waist as a central axis). It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of WANG to incorporate the step/system of obtaining a height based on the body location and setting the gesture recognition area based on the height and location to determine whether the user's waist is within the recognition area taught by HEO. 
The suggestion/motivation for doing so would have been to improve the accuracy of detecting gesture ([0121] In order to prevent the plurality of gestures from being repeatedly recognized, the processor may recognize only a gesture in a specific direction for each gesture detection area as a valid gesture). Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predicted results. Therefore, it would have been obvious to combine Setiawan and WANG with HEO to obtain the invention as specified in claim 4. Regarding claim 7, the combination of Setiawan and WANG teaches all the limitations of claim 1 above. WANG teaches wherein setting the valid determination range based on at least one of the first position information and the second position information comprises ([0070] detecting whether the first image includes a human body; segmenting the human body to obtain a plurality of segmented regions when it is detected that the first image includes the human body, and detecting whether the segmented regions include an arm region). Setiawan teaches obtaining a width range in a horizontal direction based on the second position information corresponding to the body area ([0111] Positions of body parts such as center points of the face and trunk detected by the input device 100 may be used as a reference ... 
the input device may be configured in such a manner as to determine as the gesture of circling the hand when the user's hand is moving around the center point of the trunk, when the circling radius of the user's hand is within a predetermined range calculated on the basis of the lateral width of the trunk; [0034] The position detection unit 201 detects positions of user's body parts such as the hand, face and trunk from a moving image obtained from the imaging unit 101); calculating a top-of-head position based on the first position information corresponding to the head area; and ([0034] The position detection unit 201 detects positions of user's body parts such as the hand, face and trunk from a moving image obtained from the imaging unit 101; [0039] The position of the user's face 122 is represented, for example, by a coordinate formed by a lateral position 300 at a center point 310 of the detected face and a longitudinal position 301 at the center point 310). The combination of Setiawan and WANG does not expressly teach setting the valid determination range based on an upper area and the width range of the top-of-head position. However, HEO teaches setting the valid determination range based on an upper area and the width range of the top-of-head position ([0106] The electronic device 1010 may determine a gesture area 1040 based on the location and/or body characteristics of the user 1020; [0104] the processor may configure the gesture area as the whole body 911, as an upper body 921 or a lower body 923 only, as a head 931; [0113] the area for detecting the gesture may be determined in consideration of the user's physical characteristics; [0115] the processor may determine the entire upper body 1221 of the user as an area in which a gesture is detected. The processor may determine the upper portion of the user's waist as the area in which the gesture is detected by using the user's waist). 
It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of WANG to incorporate the step/system of setting the gesture recognition area based on the upper body, which includes the head, within the recognition area taught by HEO. Motivation for this combination has been stated in claim 4. With respect to claim 13, arguments analogous to those presented for claim 4 are applicable. With respect to claim 16, arguments analogous to those presented for claim 7 are applicable.

Allowable Subject Matter

Claims 3, 5, 6, 12, 14 and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL C. CHANG whose telephone number is (571)270-1277. The examiner can normally be reached Monday-Thursday and Alternate Fridays 8:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chan S. Park, can be reached at (571) 272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANIEL C CHANG/
Examiner, Art Unit 2669

/CHAN S PARK/
Supervisory Patent Examiner, Art Unit 2669

Prosecution Timeline

May 07, 2024
Application Filed
Mar 22, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592097
REAL-TIME, FINE-RESOLUTION HUMAN INTRA-GAIT PATTERN RECOGNITION BASED ON DEEP LEARNING MODELS
2y 5m to grant · Granted Mar 31, 2026
Patent 12579672
STEREO VISION-BASED HEIGHT CLEARANCE DETECTION
2y 5m to grant · Granted Mar 17, 2026
Patent 12573047
Control Method, Device, Equipment and Storage Medium for Interactive Reproduction of Target Object
2y 5m to grant · Granted Mar 10, 2026
Patent 12548296
Spatially Preserving Flattening in Deep Learning Neural Networks
2y 5m to grant · Granted Feb 10, 2026
Patent 12541868
Image Registration Method and Apparatus, Electronic Apparatus, and Storage Medium
2y 5m to grant · Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview (+11.7%): 99%
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 132 resolved cases by this examiner. Grant probability derived from career allow rate.
