Prosecution Insights
Last updated: April 19, 2026
Application No. 18/848,142

HIGH SPEED ACTUATOR-BASED ITEM REGISTRATION SYSTEM

Non-Final OA: §101, §103, §112
Filed
Sep 18, 2024
Examiner
TRAN, ALYSE TRAMANH
Art Unit
3656
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Sony Semiconductor Solutions Corporation
OA Round
1 (Non-Final)
77%
Grant Probability
Favorable
1-2
OA Rounds
2y 10m
To Grant
99%
With Interview

Examiner Intelligence

Grants 77% — above average
77%
Career Allow Rate
20 granted / 26 resolved
+24.9% vs TC avg
Strong +50% interview lift
+50.0%
Interview Lift
with vs. without interview, across resolved cases
Typical timeline
2y 10m
Avg Prosecution
25 currently pending
Career history
51
Total Applications
across all art units

Statute-Specific Performance

§101
14.4%
-25.6% vs TC avg
§103
52.8%
+12.8% vs TC avg
§102
22.4%
-17.6% vs TC avg
§112
10.4%
-29.6% vs TC avg
Black line = Tech Center average estimate • Based on career data from 26 resolved cases
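The panel's headline figures are internally consistent and can be checked from the raw counts. A quick sketch in Python (note: the Tech Center averages are not shown on the page; they are back-solved here from the stated deltas, so treat them as inferred, not reported):

```python
# Reproduce the dashboard's headline statistics from the raw counts.
granted, resolved = 20, 26

# Career allow rate: granted / resolved, displayed rounded to a whole percent.
allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.0f}%")  # 76.9% displays as "77%"

# Statute-specific allowance rates and their stated deltas vs. the Tech
# Center average; the TC average is back-solved as rate - delta (assumption).
statutes = {"§101": (14.4, -25.6), "§103": (52.8, +12.8),
            "§102": (22.4, -17.6), "§112": (10.4, -29.6)}
for statute, (rate, delta) in statutes.items():
    tc_avg = rate - delta
    print(f"{statute}: {rate:.1f}% ({delta:+.1f} vs TC avg {tc_avg:.1f}%)")
```

Back-solving each statute yields the same 40.0% Tech Center baseline, which suggests the four deltas were computed against a single TC-wide average rather than per-statute baselines.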

Office Action

§101 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This communication is in response to Application No. 18/848,142, filed on 18-SEP-2024. Claims 1-21 are currently pending and have been examined. Claims 1-21 have been rejected as follows.

Claim Rejections - 35 USC § 112

Claim 1 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The term "be adaptive" in claim 1 is a relative term which renders the claim indefinite. The term is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Dependent claims 2-19 are also rejected based on their dependence on claim 1.

Claims 9 and 10 are also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The term "being adaptive" in claims 9 and 10 is a relative term which renders the claims indefinite. The term is not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Dependent claims 11-19 are also rejected based on their dependence on claims 9 and 10.

Claim 20 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The term "adapting" in claim 20 is a relative term which renders the claim indefinite. The term is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Dependent claim 21 is also rejected based on its dependence on claim 20.

Claim Rejections - 35 USC § 101

Claims 20 and 21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

101 Analysis – Step 1

Claim 20 is directed to a method for adapting a vision-based task system (i.e., a process). Therefore, claim 20 falls within at least one of the four statutory categories. Claim 21 does not fall within any of the four categories of patent-eligible subject matter because it is directed to a program comprising instructions for adapting a vision-based task system (i.e., software per se). Therefore, claim 21 fails step one and is rejected as directed to non-statutory subject matter.

101 Analysis – Step 2A, Prong I

Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes. Independent claim 20 includes limitations that recite an abstract idea (emphasized below) and will be used as the representative claim for the remainder of the 101 rejection.
Claim 20 recites:

A computer-implemented method for performing a vision-based registration task, the method comprising: acquiring output of an event-based vision sensor; and adapting a vision-based registration task system to inference quality derived from the output of the event-based vision sensor and/or to a task output of the vision-based registration task performed by the vision-based registration task system.

The examiner submits that the foregoing bolded limitations constitute a "mental process" because, under its broadest reasonable interpretation, the claim covers performance of the limitations in the human mind. For example, "adapting a vision-based registration task system …" and "and/or to a task output …" in the context of this claim encompass a person assessing vision sensor data and determining an adaptation for the system and/or output. Accordingly, the claim recites at least two abstract ideas.

101 Analysis – Step 2A, Prong II

Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements that merely use a computer to implement an abstract idea, add insignificant extra-solution activity, or generally link use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a "practical application."
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the "additional limitations" while the bolded portions continue to represent the "abstract idea"):

A computer-implemented method for performing a vision-based registration task, the method comprising: acquiring output of an event-based vision sensor; and adapting a vision-based registration task system to inference quality derived from the output of the event-based vision sensor and/or to a task output of the vision-based registration task performed by the vision-based registration task system.

For the following reasons, the examiner submits that the above-identified additional limitations do not integrate the above-noted abstract idea into a practical application. The additional limitation "A computer-implemented method" is a mere instruction to apply the identified mental process on a generic computer. MPEP 2106.05(f) and the cases cited therein, including Alice Corp. v. CLS Bank, 573 U.S. 208, 221, 110 USPQ2d 1976, 1982-83 (2014) (quoting Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 72, 101 USPQ2d 1961, 1965), indicate that claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible.

Regarding the additional limitation "…acquiring output of an event-based vision sensor…", the examiner submits that this limitation is insignificant extra-solution activity, as it is broad enough to encompass the pre-solution activity of gathering data. In particular, the "…acquiring output of an event-based vision sensor…" step is recited at a high level of generality (i.e., as general data acquisition) and amounts to mere data gathering, which is a form of insignificant extra-solution activity. Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application.
101 Analysis – Step 2B

Regarding Step 2B of the 2019 PEG, representative independent claim 20 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above, the examiner submits that the additional limitation "…acquiring output of an event-based vision sensor…" is insignificant extra-solution activity. Further, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B to determine whether it is more than what is well-understood, routine, conventional activity in the field. The additional limitation "…acquiring output of an event-based vision sensor…" is a well-understood, routine, and conventional activity, as it is merely the collection of data. MPEP 2106.05(d)(II) and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner. Hence, the claim is not patent eligible. Dependent claim 21 does not recite any further limitations that cause the claim to be patent eligible.
Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application, as none of the dependent claims narrow the scope so as not to encompass performance of the limitations in the human mind. Therefore, claims 20 and 21 are ineligible under 35 USC §101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-10 and 12-21 are rejected under 35 U.S.C. 103 as being unpatentable over Dean (US 2019/0025849 A1) in view of Guack (US 2022/0067689 A1).

Regarding claim 1, Dean teaches: A system comprising circuitry configured to perform a vision-based registration task (Figure 9; Paragraph [31]), the circuitry being coupled with a vision sensor (Figure 2; element 150) and being configured to be adaptive to inference quality derived from output (Paragraph [92], "At 756, controller 120 may adjust focus apparatus 170 prior to capturing a series of images 210 based on the distance sensed by depth sensor 176 and the DOF of lens 184 (controller 120 may adjust focus apparatus 170 less frequently when lens 184 has a deep DOF).") of the vision sensor (elements 150, 176; Paragraph [46], "Depth sensor 176 may be a range camera configured to produce a range image, or a time-of-flight camera which emits a light ray (e.g. an infrared light ray) and detects the reflection of the light ray, as is known in the art") and/or to be adaptive to consistency of a task output of the vision-based registration task (this limitation is being interpreted as the alternative with respect to "/or", where the circuitry is only adaptive to inference quality derived from output of the vision sensor).

While Dean teaches the limitations as stated above, it does not expressly disclose: an event-based sensor. However, Guack teaches: an event-based sensor (Paragraph [175], "For example, the sensors 414 correspond to one or more… event-based cameras"). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the robot with an imaging system that creates a combined image of product barcodes of Dean to include the use of event-based cameras for imaging the shelf units, as taught by Guack.
Such modification would have been obvious because such application would have been well within the level of skill of a person having ordinary skill in the art and would have yielded predictable results, including: a robot with an event-based imaging system that creates a combined image of product barcodes.

Regarding claim 2, Dean teaches: The system of claim 1, wherein the circuitry is configured to find the state in which the system performs the vision-based registration task optimally (element 800; Paragraphs [72, 93], "only the image having the optimal exposure of the ten images is used to construct the combined image").

Regarding claim 3, Dean teaches: The system of claim 1, wherein the circuitry is configured to determine a quality metric from the output of the event-based vision sensor (Paragraph [92], "Depth sensor 176 may produce an output indicating the distance between depth sensor 176 and the objects along path 200, which may be reflective of the distance between line scan camera 180 and the objects due to the placement and/or the calibration of depth sensor 176"), the quality metric being used in a feedback loop to control (Paragraph [92], "At 756, controller 120 may adjust focus apparatus 170 prior to capturing a series of images 210 based on the distance sensed by depth sensor 176 and the DOF of lens 184 (controller 120 may adjust focus apparatus 170 less frequently when lens 184 has a deep DOF)") an actuator (Paragraph [56], "A voice coil or a linear motor may be used to adjust the location and/or angle of any one of the mirrors") and thus affect the quality of what is being sensed by the event-based vision sensor (Paragraph [92], "Focus apparatus 170 may maintain a working distance between line scan camera 180 and the objects substantially constant to bring the objects in focus (i.e. to bring the shelves 110 in focus, as previously explained)").

Regarding claim 4, Dean teaches: The system of claim 1, wherein the system itself or an object can be moved in its relative position to the system by an actuator (Paragraph [56], "A voice coil or a linear motor may be used to adjust the location and/or angle of any one of the mirrors") which gets control input based on the inference quality derived from output of the event-based vision sensor (Paragraph [92], "At 756, controller 120 may adjust focus apparatus 170 prior to capturing a series of images 210 based on the distance sensed by depth sensor 176 and the DOF of lens 184 (controller 120 may adjust focus apparatus 170 less frequently when lens 184 has a deep DOF).") and/or the consistency of the task output (this limitation is being interpreted as the alternative with respect to "/or", where the control input is only based on the inference quality derived from output of the vision sensor).

Regarding claim 5, Dean teaches: The system of claim 1, wherein the actuator-based optical registration system (Paragraph [56], "A voice coil or a linear motor may be used to adjust the location and/or angle of any one of the mirrors"; Paragraph [92], "Focus apparatus 170 may maintain a working distance between line scan camera 180 and the objects substantially constant to bring the objects in focus (i.e. to bring the shelves 110 in focus, as previously explained)").

While Dean teaches the limitations as stated above, it does not expressly disclose: comprises an EVS-based sensing subsystem which comprises the event-based vision sensor. However, Guack teaches: the actuator-based optical registration system comprising an EVS-based sensing subsystem which comprises the event-based vision sensor (Paragraph [175], "For example, the sensors 414 correspond to one or more… event-based cameras"). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the robot with an actuator imaging system that creates a combined image of product barcodes of Dean to include the use of event-based cameras for imaging the shelf units, as taught by Guack. Such modification would have been obvious because such application would have been well within the level of skill of a person having ordinary skill in the art and would have yielded predictable results, including: a robot with an event-based actuator imaging system that creates a combined image of product barcodes.
Regarding claim 6, Dean teaches: The system of claim 1, wherein the system is moved by an actuator (Paragraph [56], "A voice coil or a linear motor may be used to adjust the location and/or angle of any one of the mirrors"), or an object is moved in its relative position to the system by an actuator (this limitation is being interpreted as the alternative with respect to "or", where the system is only moved by an actuator), the actuator getting control input based on the inference quality derived from output of the event-based vision sensor (Paragraph [92], "Depth sensor 176 may produce an output indicating the distance between depth sensor 176 and the objects along path 200, which may be reflective of the distance between line scan camera 180 and the objects due to the placement and/or the calibration of depth sensor 176. At 756, controller 120 may adjust focus apparatus 170 prior to capturing a series of images 210 based on the distance sensed by depth sensor 176") and/or the consistency of the task output ([]).

Regarding claim 7, Dean teaches: The system of claim 1… to reconstruct item tags (element 706) and/or navigational cues (this limitation is being interpreted as the alternative with respect to "/or", where the system only reconstructs item tags).

While Dean teaches the limitations as stated above, it does not expressly disclose: an EVS-based sensing subsystem which comprises the event-based vision sensor, and wherein the EVS-based sensor subsystem uses events measured by the EVS sensor. However, Guack teaches: comprising an EVS-based sensing subsystem which comprises the event-based vision sensor (Paragraph [175], "For example, the sensors 414 correspond to one or more… event-based cameras"), and wherein the EVS-based sensor subsystem uses events measured by the EVS sensor (Paragraph [45]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the robot with an imaging system that creates a combined image of product barcodes of Dean to include the use of event-based cameras for imaging the shelf units, as taught by Guack. Such modification would have been obvious because such application would have been well within the level of skill of a person having ordinary skill in the art and would have yielded predictable results, including: a robot with an event-based imaging system that creates a combined image of product barcodes.

Regarding claim 8, Dean teaches: The system of claim 1, comprising an EVS-based sensing subsystem which comprises the event-based vision sensor and wherein the EVS-based sensor subsystem is configured to output an inference quality metric (Paragraph [92], "Depth sensor 176 may produce an output indicating the distance between depth sensor 176 and the objects along path 200, which may be reflective of the distance between line scan camera 180 and the objects due to the placement and/or the calibration of depth sensor 176").

Regarding claim 9, Dean teaches: The system of claim 1, comprising a conveyor belt which is coupled to an EVS-based sensing subsystem (element 128; Paragraph [26], "As illustrated in FIG. 2, robot 100 includes a conveyance apparatus 128 for moving robot 100 along a path 200 (depicted in FIG. 5A). Robot 100 captures, using imaging system 150 on robot 100"), the movement of the conveyor belt being adaptive to inference quality (Figure 7B; element 752) derived from output of the event-based vision sensor (Paragraph [90]) and/or to consistency of a task output (this limitation is being interpreted as the alternative with respect to "/or", where the movement of the conveyor belt is only adaptive to inference quality).

Regarding claim 10, Dean teaches: The system of claim 1, wherein an EVS-based sensing subsystem is implemented onboard a scanning drone or robot (Figure 1; element 100), the movement of the scanning drone or robot being adaptive to inference quality derived from output of the event-based vision sensor (Figure 7B; element 752; Paragraphs [29, 90]) and/or to consistency of a task output (this limitation is being interpreted as the alternative with respect to "/or", where the movement of the scanning drone or robot is only adaptive to inference quality).

Regarding claim 12, Dean teaches: The system of claim 1, wherein the circuitry is configured to transform an event representation of events obtained from the event-based vision sensor (element 706).

Regarding claim 13, Dean teaches: The system of claim 1, wherein the circuitry is configured to perform image reconstruction (element 706) based on an event stream obtained by the event-based vision sensor (element 706).

Regarding claim 14, Dean teaches: The system of claim 1, wherein the circuitry is configured to perform a specified task on reconstructed images to obtain a task output (element 710).

Regarding claim 15, Dean teaches: The system of claim 1, wherein the circuitry is configured to perform inference of image quality based on reconstructed images (element 710) to obtain an image quality metric (Paragraph [104], "Accordingly, at 804, controller 120 may detect the shelf tag barcodes in the combined image by analyzing the combined image.
For example, controller 120 may search for a specific pattern that is commonly used by shelf tag barcodes. Each detected shelf tag barcode may be added as meta-data to the image, and may be further processed for correction therewith").

Regarding claim 16, Dean teaches: The system of claim 1, wherein the circuitry is configured to perform a consistency check on the task output to obtain a consistence metric (Paragraph [106], "Controller 120 may then compare the detected depth to a predefined expected depth. If the detected depth is less than the expected depth by a predefined margin, then the product may be out-of-stock, or low-in-stock").

Regarding claim 17, Dean teaches: The system of claim 1, wherein the circuitry is configured to perform a control loop based on an image quality metric (Figure 7C; element 764 -> No -> 752; Paragraph [102], "Accordingly, each location along path 200 is based on the position of robot 100 at the time at which controller 120 initiates capture of a new image or a new sequence of images"; Paragraphs [35, 102], "If method 750 continues operation at block 752, controller 120 may cause robot 100 to convey to a second location x2 that is adjacent to first location x1 along path 200 and to capture second image 212") and/or a consistence metric (this limitation is being interpreted as the alternative with respect to "/or", where the control loop is only based on an image quality metric).

Regarding claim 18, Dean teaches: The system of claim 1, wherein the circuitry is configured to deduce the possibility of the current frame containing an image tag (Paragraph [98]), and/or the circuitry is configured to localize a tag (Paragraph [104], "For example, controller 120 may search for a specific pattern that is commonly used by shelf tag barcodes").

Regarding claim 19, Dean teaches: An EVS-based sensing subsystem comprising circuitry configured to perform a vision-based registration task (Figure 9; Paragraph [31]), the subsystem comprising a vision sensor (Figure 2; element 150) and circuitry configured to output an inference quality metric to a processor of a vision-based registration task system (Paragraph [92], "At 756, controller 120 may adjust focus apparatus 170 prior to capturing a series of images 210 based on the distance sensed by depth sensor 176 and the DOF of lens 184 (controller 120 may adjust focus apparatus 170 less frequently when lens 184 has a deep DOF).").

While Dean teaches the limitations as stated above, it does not expressly disclose: an event-based sensor. However, Guack teaches: an event-based sensor (Paragraph [175], "For example, the sensors 414 correspond to one or more… event-based cameras"). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the robot with an imaging system that creates a combined image of product barcodes of Dean to include the use of event-based cameras for imaging the shelf units, as taught by Guack. Such modification would have been obvious because such application would have been well within the level of skill of a person having ordinary skill in the art and would have yielded predictable results, including: a robot with an event-based imaging system that creates a combined image of product barcodes.

Regarding claim 20, Dean teaches: A computer-implemented method (Paragraphs [31, 86], "computing device") for performing a vision-based registration task (Figure 9; Paragraph [31]), the method comprising acquiring output of a vision sensor (Paragraph [46], "Depth sensor 176 may be a range camera configured to produce a range image, or a time-of-flight camera which emits a light ray (e.g. an infrared light ray) and detects the reflection of the light ray, as is known in the art") and adapting a vision-based registration task system to inference quality (Paragraph [92], "At 756, controller 120 may adjust focus apparatus 170 prior to capturing a series of images 210 based on the distance sensed by depth sensor 176 and the DOF of lens 184 (controller 120 may adjust focus apparatus 170 less frequently when lens 184 has a deep DOF).") derived from the output of the event-based vision sensor (elements 150, 176) and/or to a task output of the vision-based registration task performed by the vision-based registration task system (this limitation is being interpreted as the alternative with respect to "/or", where the method only involves adapting a vision-based registration task system to inference quality).

While Dean teaches the limitations as stated above, it does not expressly disclose: an event-based sensor. However, Guack teaches: an event-based sensor (Paragraph [175], "For example, the sensors 414 correspond to one or more… event-based cameras"). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the robot with an imaging system that creates a combined image of product barcodes of Dean to include the use of event-based cameras for imaging the shelf units, as taught by Guack. Such modification would have been obvious because such application would have been well within the level of skill of a person having ordinary skill in the art and would have yielded predictable results, including: a robot with an event-based imaging system that creates a combined image of product barcodes.

Regarding claim 21, Dean teaches: A program comprising instructions, the instructions being configured to (Figure 124, 138; Paragraph [31]), when operated by a processor, perform the method of claim 20 (Figure 120).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Dean (US 2019/0025849 A1) in view of Guack (US 2022/0067689 A1), and further in view of Cheng (US 2020/0097892 A1).

Regarding claim 11, Dean teaches: The system of claim 1, wherein the item tags are selected from barcodes (elements 1050-1054) … and wherein the vision-based registration task comprises a process of tag detection (Paragraph [104], "For example, controller 120 may search for a specific pattern that is commonly used by shelf tag barcodes").

While Dean and Guack teach the limitations as stated above, they do not expressly disclose: QR codes and April tags. However, Cheng teaches: QR codes and April tags (Paragraph [8]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify a robot with an event-based actuator imaging system that creates a combined image of product barcodes of Dean and Guack to include QR codes and April tags for product identification, as taught by Cheng. Such modification would have been obvious because such application would have been well within the level of skill of a person having ordinary skill in the art and would have yielded predictable results, including: a robot with an event-based actuator imaging system that creates a combined identification image of products, such as barcodes, QR codes, and April tags.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALYSE TRAMANH TRAN, whose telephone number is (703) 756-5879. The examiner can normally be reached M-F 8:30am-5pm ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Khoi Tran, can be reached at 571-272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.T.T./ Examiner, Art Unit 3656
/KHOI H TRAN/ Supervisory Patent Examiner, Art Unit 3656

Prosecution Timeline

Sep 18, 2024
Application Filed
Mar 06, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12569994
ROBOT APPARATUS
2y 5m to grant Granted Mar 10, 2026
Patent 12566071
METHOD OF ROUTE PLANNING AND ELECTRONIC DEVICE USING THE SAME
2y 5m to grant Granted Mar 03, 2026
Patent 12544826
BINDING DEVICE, BINDING SYSTEM, METHOD FOR CONTROLLING BINDING DEVICE, AND COMPUTER READABLE STORAGE MEDIUM STORING PROGRAM
2y 5m to grant Granted Feb 10, 2026
Patent 12539613
DETECTION AND MITIGATION OF PREDICTED COLLISIONS OF OBJECTS WITH USER CONTROL SYSTEM
2y 5m to grant Granted Feb 03, 2026
Patent 12515321
Method for Generating a Training Dataset for Training an Industrial Robot
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
99%
With Interview (+50.0%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 26 resolved cases by this examiner. Grant probability derived from career allow rate.
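The projection panel's figures fit a simple model built on the career allow rate. A sketch of one plausible derivation (the relative-lift interpretation and the 99% display cap are assumptions on my part, not the tool's documented methodology):

```python
# Reconstruct the projection panel's numbers from the examiner's record.
base_rate = 20 / 26        # career allow rate, approx. 0.769, shown as 77%
interview_lift = 0.50      # stated +50.0% lift from conducting an interview

# Assumed model: apply the lift as a relative multiplier, then cap the
# displayed probability at 99% (capping rule is an assumption).
with_interview = min(base_rate * (1 + interview_lift), 0.99)

print(f"Grant probability: {base_rate:.0%}")
print(f"With interview:    {with_interview:.0%}")
```

Under this reading, 77% x 1.5 exceeds the cap, which would explain why the panel shows 99% rather than a computed value; if the tool instead applies the lift to the non-grant residual or to per-case odds, the arithmetic would differ while landing near the same display.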
