Prosecution Insights
Last updated: April 19, 2026
Application No. 18/847,755

ROBOT SYSTEM, PROCESSING METHOD, AND RECORDING MEDIUM

Status: Non-Final Office Action (§103)
Filed: Sep 17, 2024
Examiner: HOQUE, SHAHEDA SHABNAM
Art Unit: 3658
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: NEC Corporation
OA Round: 1 (Non-Final)

Predictions:
Grant probability: 43% (Moderate)
Expected OA rounds: 1-2
Estimated time to grant: 3y 1m
Grant probability with interview: 81%

Examiner Intelligence

Career allow rate: 43% (25 granted / 58 resolved), -8.9% vs Tech Center average
Interview lift: +37.9% allowance rate for resolved cases with an interview versus without
Typical timeline: 3y 1m average prosecution
Currently pending: 38 applications
Career history: 96 total applications across all art units
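These cards reduce to simple ratios over the examiner's 58 resolved cases. A minimal sketch of the arithmetic, assuming the lift is the with-interview allowance rate minus the without-interview rate (the dashboard's exact methodology is not stated):

    # Career allowance rate: granted / resolved.
    granted, resolved = 25, 58
    career_allow_rate = granted / resolved            # 0.431 -> "43%"

    # Interview lift, assuming: lift = rate(with interview) - rate(without).
    rate_with_interview = 0.81                        # "81% with interview"
    rate_without = rate_with_interview - 0.379        # implied ~43%

    print(f"Career allow rate: {career_allow_rate:.1%}")                 # 43.1%
    print(f"Interview lift: {rate_with_interview - rate_without:+.1%}")  # +37.9%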

Statute-Specific Performance

Statute   Rate     vs TC avg
§101      10.5%    -29.5%
§103      61.8%    +21.8%
§102      16.9%    -23.1%
§112      10.2%    -29.8%

Tech Center average is an estimate. Based on career data from 58 resolved cases.
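Each "vs TC avg" figure is the row's rate minus the Tech Center estimate; back-solving shows all four rows reference a single shared baseline of about 40%, consistent with the one average line the source chart drew. A quick check of that arithmetic (a sketch, assuming simple subtraction):

    # Back-solve the Tech Center baseline implied by each row:
    # baseline = examiner_rate - delta.
    rows = {"§101": (0.105, -0.295), "§103": (0.618, 0.218),
            "§102": (0.169, -0.231), "§112": (0.102, -0.298)}
    for statute, (rate, delta) in rows.items():
        print(f"{statute}: implied TC average = {rate - delta:.1%}")
    # Every row implies the same 40.0% baseline estimate.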

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 09/17/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

Claim 14 is objected to because of the following informalities: “the robot system according to claim 1, wherein the processor is configured to perform first processing content in a case where the type of the physical object and performs second processing content different from the first processing content in a case where the type of the physical object has not been identified from an image obtained by capturing the target object after the first processing content is performed” should read “the robot system according to claim 1, wherein the processor is configured to performs second processing content different from the first processing content in a case where the type of the physical object has not been identified from an image obtained by capturing the target object after the first processing content is performed”.

Claim 17 is objected to because of the following informalities: “control an operation of the robot arm so that the robot arm performs an operation on a target object based on a result of recognizing an image obtained from an camera capturing the target object” should read “control an operation of the robot arm so that the robot arm performs an operation on a target object based on a result of recognizing an image obtained from a camera capturing the target object”.

Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-4, 15, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Yamada et al. (US2017282363A1) (Hereinafter Yamada) in view of Wang et al. (US 20190102873 A1) (Hereinafter Wang).

Regarding Claim 1, Yamada teaches a robot system comprising: a robot arm configured to be able to grasp a target object including a packaging member with transparency and a physical object packaged by the packaging member (See at least Para [0026] “According to the present exemplary embodiment, the “target object” to be manipulated by the robot may be a semitransparent or transparent part that is difficult to recognize by a vision sensor.
Examples include a toner container. In the present exemplary embodiment, such target objects may be put in a box in a bulk manner…”, Para [0027] “FIG. 1 is a diagram for describing an example of a configuration of the robot system for implementing gripping of a target object and recognition processing of the target object by the robot system according to the present exemplary embodiment…”); a memory configured to store instructions (See at least Para [0069] “… The program is loaded into a not-illustrated memory of the information processing apparatus 104. As a result, the CPU in the information processing apparatus 104 becomes ready to execute the program.”); and a processor configured to execute the instructions (See at least Para [0069] “… The program is loaded into a not-illustrated memory of the information processing apparatus 104. As a result, the CPU in the information processing apparatus 104 becomes ready to execute the program.”) to: identify a type of the physical object based on image processing on an image of the target object (See at least Para [0072] “In step S3, the gripping state recognition unit 203 performs three-dimensional shape measurement. The three-dimensional shape measurement can be performed, for example, by pattern matching with the target objects 106. More specifically, the gripping state recognition unit 203 refers to the information of the part database 204, and performs pattern matching to determine the part type of the target objects 106 based on the image information obtained by the imaging device 102…”); and Although Yamada teaches … a case where the type of the physical object has not been identified (See at least Para [0105] “In the first exemplary embodiment, the shape of the target object 106 is recognized in consideration of only the static operation of the gripping unit (fingers). If the target object 106 is wrapped in a plastic bag or other cushioning materials (packaging materials), the end effector 105 in contact with the target object 106 is therefore not always able to fully compress the packaging materials. In such a case, the shape of the contents, i.e., the target object 106 itself is not always able to be measured.”, Para [0081] “…If the end effector 105 reaches the target position and the gripping unit is not detected to be in contact with the target object 106, a new target position is set to perform the operation again.”, Para [0036] “In the present exemplary embodiment, a change of the end effector 105 when the end effector 105 contacts a target object 106 is measured. The information processing apparatus 104 recognizes states of the target object 106 based on the measured change of the end effector 105. The states to be recognized include the gripping state of the target object 106 by the end effector 105, and the shape, position, and orientation of the target object 106.”), he does not explicitly spell out make a change to an environment different from an environment in which the image of the target object has been captured in a case where the type of the physical object has not been identified. Wang teaches make a change to an environment different from an environment in which the image of the target object has been captured in a case where the type of the physical object has not been identified (See at least Para [0066] “In the example shown in FIG. 5, the system first captures an image of the to-be-identified object under a natural, unadjusted environment. 
However, this can lead to uncertainty in the captured images, because the natural environment can change. For example, the lighting or illumination conditions can be different for day and night, and the temperature conditions can also change due to change of seasons. Such an uncertainty means that a large number of training samples will be needed to train the machine-learning module. To simplify the training process, in some embodiments, the system may capture a first image when the to-be-identified object is exposed to a first set of environmental factors (which are different from the target environmental factors). FIG. 6 presents a flowchart illustrating an exemplary process for identifying an object, according to one embodiment. During operation, the system obtains a to-be-identified object (operation 602) and determines a set of relevant environmental factors (operation 604). Depending on the category of the to-be-identified object, the relevant environmental factors can include, but are not limited to: illumination conditions, temperature, humidity, air pressure, electrical or magnetic field distribution, etc. The system adjusts the relevant environmental factors to achieve a first set of environmental factors, to which the to-be-identified object is exposed (operation 606) and captures a first image of the to-be-identified object (operation 608). Subsequent to capturing the first image, the system adjusts the relevant environmental factors to achieve the target environmental factors (operation 610) and captures a second image of the to-be-identified object when it is exposed to the target environmental factors (operation 612)…”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the robot control apparatus of Yamada with the teachings of Wang and include the feature of making a change to an environment different from an environment in which the image of the target object has been captured in a case where the type of the physical object has not been identified, thereby providing adjustment to the environment in order to receive different images to accurately identify the object to be manipulated (See at least Para [0032] “… Therefore, an automated system can use these features to identify the to-be-identified object, obviating the need for manual labor or experience, thus significantly enhancing the accuracy and reliability of the object-identification process.”).

Regarding Claim 2, modified Yamada teaches all the elements of claim 1. Yamada further teaches the robot system according to claim 1, comprising a camera configured to be able to capture the image of the target object (See at least Para [0030] “An imaging device (vision sensor) 102 is an imaging unit. The imaging device 102 may include, for example, a camera, a light-detecting sensor, or a photodiode. The imaging device 102 obtains image information about the target objects 106 and the arrangement locations 107. The information processing apparatus 104 processes the image information obtained by the imaging device 102.”). Although Yamada teaches … a case where the type of the physical object has not been identified (See at least Para [0105] “In the first exemplary embodiment, the shape of the target object 106 is recognized in consideration of only the static operation of the gripping unit (fingers).
If the target object 106 is wrapped in a plastic bag or other cushioning materials (packaging materials), the end effector 105 in contact with the target object 106 is therefore not always able to fully compress the packaging materials. In such a case, the shape of the contents, i.e., the target object 106 itself is not always able to be measured.”, Para [0081] “…If the end effector 105 reaches the target position and the gripping unit is not detected to be in contact with the target object 106, a new target position is set to perform the operation again.”, Para [0036] “In the present exemplary embodiment, a change of the end effector 105 when the end effector 105 contacts a target object 106 is measured. The information processing apparatus 104 recognizes states of the target object 106 based on the measured change of the end effector 105. The states to be recognized include the gripping state of the target object 106 by the end effector 105, and the shape, position, and orientation of the target object 106.”), he does not explicitly spell out wherein the processor is configured to control the camera to form an angle different from an angle of the camera at which the image of the target object has been captured in a case where the type of the physical object has not been identified. Wang teaches wherein the processor is configured to control the camera to form an angle different from an angle of the camera at which the image of the target object has been captured in a case where the type of the physical object has not been identified (See at least Para [0050] “Note that, regardless of its composition or form, the to-be-identified object will reflect light, and features of images formed by the reflected light may indicate the composition or form of the to-be-identified object. In some embodiments, in order to ensure the reliability of the features extracted from the captured images, the system needs to carefully control the illumination condition, which can include angle, wavelength, and intensity of the light. In further embodiments, controlling the illumination condition can involve controlling a light source, such as florescent or LED lamps, screens or displays, or torches. The light source may also include natural light sources, such as the sun, the moon, or fluorite materials. Adjusting the lighting angle may involve adjusting the relative positions between the light source and the to-be-identified object. The wavelength of the incident light can include infrared, visible light, and ultraviolet light. In some embodiments, one can also adjust the number of light sources in order to adjust the illumination condition. In most cases, the to-be-identified object is in an environment with an existing light source (e.g., the sun or lighting in the room) that can be hard to eliminate or filter; hence, to achieve a desired illumination condition, one may need to add one or more additional light sources. An additional light source can be different from the existing light source in at least one of: angle, wavelength, and intensity…”). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the robot control apparatus of Yamada with the teachings of Wang and include the feature of the processor being configured to control the camera to form an angle different from an angle of the camera at which the image of the target object has been captured in a case where the type of the physical object has not been identified, thereby accurately identifying the object to be manipulated (See at least Para [0032] “… Therefore, an automated system can use these features to identify the to-be-identified object, obviating the need for manual labor or experience, thus significantly enhancing the accuracy and reliability of the object-identification process.”).

Regarding Claim 3, modified Yamada teaches all the elements of claim 1. Yamada further teaches the robot system according to claim 1, comprising an illuminator configured to illuminate the target object (See at least [0032] “A light source 103 may include, for example, a projector which projects visible light. Alternatively, the light source 103 may be configured to project infrared light by using a laser. The light (visible light or infrared laser light) from the light source 103 illuminates the target objects 106 and the arrangement locations 107 with uniform illumination light or patterned light…”). Although Yamada teaches … a case where the type of the physical object has not been identified (See at least Para [0105] “In the first exemplary embodiment, the shape of the target object 106 is recognized in consideration of only the static operation of the gripping unit (fingers). If the target object 106 is wrapped in a plastic bag or other cushioning materials (packaging materials), the end effector 105 in contact with the target object 106 is therefore not always able to fully compress the packaging materials. In such a case, the shape of the contents, i.e., the target object 106 itself is not always able to be measured.”, Para [0081] “…If the end effector 105 reaches the target position and the gripping unit is not detected to be in contact with the target object 106, a new target position is set to perform the operation again.”, Para [0036] “In the present exemplary embodiment, a change of the end effector 105 when the end effector 105 contacts a target object 106 is measured. The information processing apparatus 104 recognizes states of the target object 106 based on the measured change of the end effector 105. The states to be recognized include the gripping state of the target object 106 by the end effector 105, and the shape, position, and orientation of the target object 106.”), he does not explicitly spell out wherein the processor is configured to make a change to a state of light different from a state of light radiated to the target object in a state in which the image of the target object has been captured in a case where the type of the physical object has not been identified.
Wang teaches wherein the processor is configured to make a change to a state of light different from a state of light radiated to the target object in a state in which the image of the target object has been captured in a case where the type of the physical object has not been identified (See at least Para [0063] “Subsequent to capturing images of the to-be-identified object under a normal, un-adjusted condition, the system can adjust environmental factors to which the to-be-identified object is exposed (operation 508). In some embodiments, the system can monitor current environmental factors that are relevant to object identification and adjust these environmental factors to achieve target environmental factors. For example, if the target environmental factors include an illumination condition, achieving the target environmental factors may involve adjusting the number, locations, and angle of the light sources. A to-be-identified object may include multiple facets facing different directions, and incident light of different angles may be reflected by different facets, resulting in captured images of the to-be-identified object having different image features. Therefore, to ensure that incident light illuminates the area of interest on the to-be-identified object, the system may need to adjust the relative position between the light source and the to-be-identified object, which may involve either moving the light source or the to-be-identified object. More particularly, if the to-be-identified object is bulky and difficult to move, the system may adjust the location of the light source…”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the robot control apparatus of Yamada with the teachings of Wang and include the feature of making a change to a state of light different from a state of light radiated to the target object in a state in which the image of the target object has been captured in a case where the type of the physical object has not been identified, thereby accurately identifying the object to be manipulated (See at least Para [0032] “… Therefore, an automated system can use these features to identify the to-be-identified object, obviating the need for manual labor or experience, thus significantly enhancing the accuracy and reliability of the object-identification process.”).

Regarding Claim 4, modified Yamada teaches all the elements of claim 3. Although Yamada teaches … a case where the type of the physical object has not been identified (See at least Para [0105] “In the first exemplary embodiment, the shape of the target object 106 is recognized in consideration of only the static operation of the gripping unit (fingers). If the target object 106 is wrapped in a plastic bag or other cushioning materials (packaging materials), the end effector 105 in contact with the target object 106 is therefore not always able to fully compress the packaging materials. In such a case, the shape of the contents, i.e., the target object 106 itself is not always able to be measured.”, Para [0081] “…If the end effector 105 reaches the target position and the gripping unit is not detected to be in contact with the target object 106, a new target position is set to perform the operation again.”, Para [0036] “In the present exemplary embodiment, a change of the end effector 105 when the end effector 105 contacts a target object 106 is measured. The information processing apparatus 104 recognizes states of the target object 106 based on the measured change of the end effector 105. The states to be recognized include the gripping state of the target object 106 by the end effector 105, and the shape, position, and orientation of the target object 106.”), he does not explicitly spell out the robot system according to claim 3, wherein the processor is configured to control the illuminator to form an angle different from the angle of the light in the state in which the image of the target object has been captured in a case where the type of the physical object has not been identified. Wang teaches the robot system according to claim 3, wherein the processor is configured to control the illuminator to form an angle different from the angle of the light in the state in which the image of the target object has been captured in a case where the type of the physical object has not been identified (See at least Para [0063] “Subsequent to capturing images of the to-be-identified object under a normal, un-adjusted condition, the system can adjust environmental factors to which the to-be-identified object is exposed (operation 508). In some embodiments, the system can monitor current environmental factors that are relevant to object identification and adjust these environmental factors to achieve target environmental factors. For example, if the target environmental factors include an illumination condition, achieving the target environmental factors may involve adjusting the number, locations, and angle of the light sources. A to-be-identified object may include multiple facets facing different directions, and incident light of different angles may be reflected by different facets, resulting in captured images of the to-be-identified object having different image features. Therefore, to ensure that incident light illuminates the area of interest on the to-be-identified object, the system may need to adjust the relative position between the light source and the to-be-identified object, which may involve either moving the light source or the to-be-identified object. More particularly, if the to-be-identified object is bulky and difficult to move, the system may adjust the location of the light source…”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the robot control apparatus of Yamada with the teachings of Wang and include the feature of the processor being configured to control the illuminator to form an angle different from the angle of the light in the state in which the image of the target object has been captured in a case where the type of the physical object has not been identified, thereby accurately identifying the object to be manipulated (See at least Para [0032] “… Therefore, an automated system can use these features to identify the to-be-identified object, obviating the need for manual labor or experience, thus significantly enhancing the accuracy and reliability of the object-identification process.”).
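As mapped above, claims 1-4 share one control pattern: attempt identification from an image and, on failure, alter the capture environment (camera pose for claim 2, illumination for claims 3 and 4) before retrying. A minimal sketch of that fallback loop follows; every name, angle step, and stub here is an illustrative assumption, not code from the application or the cited references.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class CaptureEnvironment:
        camera_angle_deg: float   # claim 2: camera pose relative to the target
        light_angle_deg: float    # claims 3-4: illumination angle

    def capture_image(env: CaptureEnvironment) -> bytes:
        """Placeholder for grabbing a frame under the given environment."""
        raise NotImplementedError  # hardware-specific

    def identify_type(image: bytes) -> Optional[str]:
        """Placeholder for pattern matching against known part types."""
        raise NotImplementedError  # e.g. matching against a part database

    def identify_with_fallback(env: CaptureEnvironment,
                               max_attempts: int = 3) -> Optional[str]:
        for _ in range(max_attempts):
            part_type = identify_type(capture_image(env))
            if part_type is not None:
                return part_type
            # Type not identified: change the environment before recapturing,
            # e.g. a new camera angle (claim 2) and light angle (claims 3-4).
            env = CaptureEnvironment(env.camera_angle_deg + 15.0,
                                     env.light_angle_deg + 30.0)
        return None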
Regarding Claim 15, Yamada teaches a processing method to be performed by a robot system including a robot arm configured to be able to grasp a target object including a packaging member with transparency and a physical object packaged by the packaging member (See at least Para [0026] “According to the present exemplary embodiment, the “target object” to be manipulated by the robot may be a semitransparent or transparent part that is difficult to recognize by a vision sensor. Examples include a toner container. In the present exemplary embodiment, such target objects may be put in a box in a bulk manner…”, Para [0027] “FIG. 1 is a diagram for describing an example of a configuration of the robot system for implementing gripping of a target object and recognition processing of the target object by the robot system according to the present exemplary embodiment…”), the processing method comprising: driving the robot arm (See at least [0028] “A robot 101 is a robot including a robot arm. An end effector 105 for gripping and manipulating (for example, moving) a target object 106 is attached to the end portion of the robot arm. The robot 101 performs an action determined by an information processing apparatus 104 (robot control apparatus) to manipulate the target object 106 .”); identifying a type of the physical object based on image processing on an image of the target object (See at least Para [0072] “In step S3, the gripping state recognition unit 203 performs three-dimensional shape measurement. The three-dimensional shape measurement can be performed, for example, by pattern matching with the target objects 106. More specifically, the gripping state recognition unit 203 refers to the information of the part database 204, and performs pattern matching to determine the part type of the target objects 106 based on the image information obtained by the imaging device 102…”); and Although Yamada teaches … a case where the type of the physical object has not been identified (See at least Para [0105] “In the first exemplary embodiment, the shape of the target object 106 is recognized in consideration of only the static operation of the gripping unit (fingers). If the target object 106 is wrapped in a plastic bag or other cushioning materials (packaging materials), the end effector 105 in contact with the target object 106 is therefore not always able to fully compress the packaging materials. In such a case, the shape of the contents, i.e., the target object 106 itself is not always able to be measured.”, Para [0081] “…If the end effector 105 reaches the target position and the gripping unit is not detected to be in contact with the target object 106, a new target position is set to perform the operation again.”, Para [0036] “In the present exemplary embodiment, a change of the end effector 105 when the end effector 105 contacts a target object 106 is measured. The information processing apparatus 104 recognizes states of the target object 106 based on the measured change of the end effector 105. The states to be recognized include the gripping state of the target object 106 by the end effector 105, and the shape, position, and orientation of the target object 106.”), he does not explicitly spell out making a change to an environment different from an environment in which the image of the target object has been captured in a case where the type of the physical object has not been identified. 
Wang teaches making a change to an environment different from an environment in which the image of the target object has been captured in a case where the type of the physical object has not been identified (See at least Para [0066] “In the example shown in FIG. 5, the system first captures an image of the to-be-identified object under a natural, unadjusted environment. However, this can lead to uncertainty in the captured images, because the natural environment can change. For example, the lighting or illumination conditions can be different for day and night, and the temperature conditions can also change due to change of seasons. Such an uncertainty means that a large number of training samples will be needed to train the machine-learning module. To simplify the training process, in some embodiments, the system may capture a first image when the to-be-identified object is exposed to a first set of environmental factors (which are different from the target environmental factors). FIG. 6 presents a flowchart illustrating an exemplary process for identifying an object, according to one embodiment. During operation, the system obtains a to-be-identified object (operation 602) and determines a set of relevant environmental factors (operation 604). Depending on the category of the to-be-identified object, the relevant environmental factors can include, but are not limited to: illumination conditions, temperature, humidity, air pressure, electrical or magnetic field distribution, etc. The system adjusts the relevant environmental factors to achieve a first set of environmental factors, to which the to-be-identified object is exposed (operation 606) and captures a first image of the to-be-identified object (operation 608). Subsequent to capturing the first image, the system adjusts the relevant environmental factors to achieve the target environmental factors (operation 610) and captures a second image of the to-be-identified object when it is exposed to the target environmental factors (operation 612)…”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the robot control apparatus of Yamada with the teachings of Wang and include the feature of making a change to an environment different from an environment in which the image of the target object has been captured in a case where the type of the physical object has not been identified, thereby providing adjustment to the environment in order to receive different images to accurately identify the object to be manipulated (See at least Para [0032] “… Therefore, an automated system can use these features to identify the to-be-identified object, obviating the need for manual labor or experience, thus significantly enhancing the accuracy and reliability of the object-identification process.”).

Regarding Claim 17, Yamada teaches a robot system comprising: a robot arm (See at least [0028] “A robot 101 is a robot including a robot arm. An end effector 105 for gripping and manipulating (for example, moving) a target object 106 is attached to the end portion of the robot arm. The robot 101 performs an action determined by an information processing apparatus 104 (robot control apparatus) to manipulate the target object 106.”); and a memory configured to store instructions (See at least Para [0069] “… The program is loaded into a not-illustrated memory of the information processing apparatus 104.
As a result, the CPU in the information processing apparatus 104 becomes ready to execute the program.”); and a processor configured to execute the instructions (See at least Para [0069] “… The program is loaded into a not-illustrated memory of the information processing apparatus 104. As a result, the CPU in the information processing apparatus 104 becomes ready to execute the program.”) to: control an operation of the robot arm so that the robot arm performs an operation on a target object based on a result of recognizing an image obtained from an camera capturing the target object (See at least Para [0072] “In step S3, the gripping state recognition unit 203 performs three-dimensional shape measurement. The three-dimensional shape measurement can be performed, for example, by pattern matching with the target objects 106. More specifically, the gripping state recognition unit 203 refers to the information of the part database 204, and performs pattern matching to determine the part type of the target objects 106 based on the image information obtained by the imaging device 102…”), wherein the target object is a physical object packaged by a packaging member with transparency (See at least Para [0026] “According to the present exemplary embodiment, the “target object” to be manipulated by the robot may be a semitransparent or transparent part that is difficult to recognize by a vision sensor. Examples include a toner container. In the present exemplary embodiment, such target objects may be put in a box in a bulk manner…”), and Although Yamada teaches … a case where the type of the physical object has not been identified (See at least Para [0105] “In the first exemplary embodiment, the shape of the target object 106 is recognized in consideration of only the static operation of the gripping unit (fingers). If the target object 106 is wrapped in a plastic bag or other cushioning materials (packaging materials), the end effector 105 in contact with the target object 106 is therefore not always able to fully compress the packaging materials. In such a case, the shape of the contents, i.e., the target object 106 itself is not always able to be measured.”, Para [0081] “…If the end effector 105 reaches the target position and the gripping unit is not detected to be in contact with the target object 106, a new target position is set to perform the operation again.”, Para [0036] “In the present exemplary embodiment, a change of the end effector 105 when the end effector 105 contacts a target object 106 is measured. The information processing apparatus 104 recognizes states of the target object 106 based on the measured change of the end effector 105. The states to be recognized include the gripping state of the target object 106 by the end effector 105, and the shape, position, and orientation of the target object 106.”), he does not explicitly spell out wherein the processor is configured to control the robot arm so that an environment in which the camera captures the target object is changed in a case where the physical object has not been identified from the image. Wang teaches wherein the processor is configured to control the robot arm so that an environment in which the camera captures the target object is changed in a case where the physical object has not been identified from the image (See at least Para [0066] “In the example shown in FIG. 5, the system first captures an image of the to-be-identified object under a natural, unadjusted environment. 
However, this can lead to uncertainty in the captured images, because the natural environment can change. For example, the lighting or illumination conditions can be different for day and night, and the temperature conditions can also change due to change of seasons. Such an uncertainty means that a large number of training samples will be needed to train the machine-learning module. To simplify the training process, in some embodiments, the system may capture a first image when the to-be-identified object is exposed to a first set of environmental factors (which are different from the target environmental factors). FIG. 6 presents a flowchart illustrating an exemplary process for identifying an object, according to one embodiment. During operation, the system obtains a to-be-identified object (operation 602) and determines a set of relevant environmental factors (operation 604). Depending on the category of the to-be-identified object, the relevant environmental factors can include, but are not limited to: illumination conditions, temperature, humidity, air pressure, electrical or magnetic field distribution, etc. The system adjusts the relevant environmental factors to achieve a first set of environmental factors, to which the to-be-identified object is exposed (operation 606) and captures a first image of the to-be-identified object (operation 608). Subsequent to capturing the first image, the system adjusts the relevant environmental factors to achieve the target environmental factors (operation 610) and captures a second image of the to-be-identified object when it is exposed to the target environmental factors (operation 612)…”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the robot control apparatus of Yamada with the teachings of Wang and include the feature of the processor being configured to control the robot arm so that an environment in which the camera captures the target object is changed in a case where the physical object has not been identified from the image, thereby providing adjustment to the environment in order to receive different images to accurately identify the object to be manipulated (See at least Para [0032] “… Therefore, an automated system can use these features to identify the to-be-identified object, obviating the need for manual labor or experience, thus significantly enhancing the accuracy and reliability of the object-identification process.”).

Claim(s) 5 is rejected under 35 U.S.C. 103 as being unpatentable over Yamada et al. (US2017282363A1) (Hereinafter Yamada) in view of Wang et al. (US 20190102873 A1) (Hereinafter Wang), and further in view of MA et al. (US 20200033710 A1) (Hereinafter MA).

Regarding Claim 5, modified Yamada teaches all the elements of claim 3. Although Yamada teaches … a case where the type of the physical object has not been identified (See at least Para [0105] “In the first exemplary embodiment, the shape of the target object 106 is recognized in consideration of only the static operation of the gripping unit (fingers). If the target object 106 is wrapped in a plastic bag or other cushioning materials (packaging materials), the end effector 105 in contact with the target object 106 is therefore not always able to fully compress the packaging materials. In such a case, the shape of the contents, i.e., the target object 106 itself is not always able to be measured.”, Para [0081] “…If the end effector 105 reaches the target position and the gripping unit is not detected to be in contact with the target object 106, a new target position is set to perform the operation again.”, Para [0036] “In the present exemplary embodiment, a change of the end effector 105 when the end effector 105 contacts a target object 106 is measured. The information processing apparatus 104 recognizes states of the target object 106 based on the measured change of the end effector 105. The states to be recognized include the gripping state of the target object 106 by the end effector 105, and the shape, position, and orientation of the target object 106.”), he does not explicitly spell out the robot system according to claim 3, wherein the processor is configured to cause a physical object, which changes a refractive index of light between the target object and the illuminator, to move in a case where the type of the physical object has not been identified. MA teaches the robot system according to claim 3, wherein the processor is configured to cause a physical object, which changes a refractive index of light between the target object and the illuminator, to move in a case where the type of the physical object has not been identified (See at least Para [0112] “… In this manner, the refractive indexes of the LC changes with the change of the light polarization throughout operation of the light projector (and thus the light distribution is based on the polarity of the light passing through the elements).”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the robot control apparatus of Yamada with the teachings of MA and include the feature of causing a physical object, which changes a refractive index of light between the target object and the illuminator, to move in a case where the type of the physical object has not been identified, thereby accurately identifying the object to be manipulated.

Claim(s) 6 is rejected under 35 U.S.C. 103 as being unpatentable over Yamada et al. (US2017282363A1) (Hereinafter Yamada) in view of Wang et al. (US 20190102873 A1) (Hereinafter Wang), MA et al. (US 20200033710 A1) (Hereinafter MA), and further in view of Kay (US 20090055024 A1).

Regarding Claim 6, modified Yamada teaches all the elements of claim 3. Although Yamada teaches … a case where the type of the physical object has not been identified (See at least Para [0105] “In the first exemplary embodiment, the shape of the target object 106 is recognized in consideration of only the static operation of the gripping unit (fingers). If the target object 106 is wrapped in a plastic bag or other cushioning materials (packaging materials), the end effector 105 in contact with the target object 106 is therefore not always able to fully compress the packaging materials. In such a case, the shape of the contents, i.e., the target object 106 itself is not always able to be measured.”, Para [0081] “…If the end effector 105 reaches the target position and the gripping unit is not detected to be in contact with the target object 106, a new target position is set to perform the operation again.”, Para [0036] “In the present exemplary embodiment, a change of the end effector 105 when the end effector 105 contacts a target object 106 is measured. The information processing apparatus 104 recognizes states of the target object 106 based on the measured change of the end effector 105. The states to be recognized include the gripping state of the target object 106 by the end effector 105, and the shape, position, and orientation of the target object 106.”), he does not explicitly spell out the robot system according to claim 5, comprising a second robot arm separate from the robot arm, wherein the processor is configured to cause the second robot arm to move between the target object and the illuminator in a case where the type of the physical object has not been identified. Kay teaches the robot system according to claim 5, comprising a second robot arm separate from the robot arm (See at least Fig 5, Para [0050] “…For example, as shown in FIG. 5, 3D camera 16 could be used to control robotic arm 10, and a second arm 70…”), wherein the processor is configured to cause the second robot arm to move between the target object and the illuminator in a case where the type of the physical object has not been identified (See at least Para [0050] “A system in accordance with the present invention could be implemented using any of a number of different configurations. For example, as shown in FIG. 5, 3D camera 16 could be used to control robotic arm 10, and a second arm 70. In this arrangement, arm 70 includes one or more active fiducials 72 which can be differentiated from those used on arm 10. Both fiducials 72 and the target object 74 to be manipulated by arm 70 must be within the FOV of 3D camera 16, or the system must be arranged such that the position of 3D camera 16 can be moved as needed to accommodate both arms. Controller 24 is arranged to process the spatial position information for both arms, and provides command signals 12 and 76 to arms 10 and 70, respectively, to control their movements.”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the robot control apparatus of Yamada with the teachings of Kay and include the feature of a second robot arm separate from the robot arm, wherein the processor is configured to cause the second robot arm to move between the target object and the illuminator in a case where the type of the physical object has not been identified, thereby providing accuracy and efficiency during object manipulation (See at least Para [0008] “A robotic arm and control system are presented which overcome the problems noted above, providing efficient and accurate effector positioning and movement under a variety of operating conditions.”).

Claim(s) 7, 8, 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Yamada et al. (US2017282363A1) (Hereinafter Yamada) in view of Wang et al. (US 20190102873 A1) (Hereinafter Wang), and further in view of Suzuki (US 20130158947 A1).

Regarding Claim 7, modified Yamada teaches all the elements of claim 1. Although Yamada teaches … a case where the type of the physical object has not been identified (See at least Para [0105] “In the first exemplary embodiment, the shape of the target object 106 is recognized in consideration of only the static operation of the gripping unit (fingers). If the target object 106 is wrapped in a plastic bag or other cushioning materials (packaging materials), the end effector 105 in contact with the target object 106 is therefore not always able to fully compress the packaging materials. In such a case, the shape of the contents, i.e., the target object 106 itself is not always able to be measured.”, Para [0081] “…If the end effector 105 reaches the target position and the gripping unit is not detected to be in contact with the target object 106, a new target position is set to perform the operation again.”, Para [0036] “In the present exemplary embodiment, a change of the end effector 105 when the end effector 105 contacts a target object 106 is measured. The information processing apparatus 104 recognizes states of the target object 106 based on the measured change of the end effector 105. The states to be recognized include the gripping state of the target object 106 by the end effector 105, and the shape, position, and orientation of the target object 106.”), he does not explicitly spell out the robot system according to claim 1, wherein the processor is configured to control the robot arm so that a state of the target object changes in a case where the type of the physical object has not been identified. Suzuki teaches the robot system according to claim 1, wherein the processor is configured to control the robot arm so that a state of the target object changes in a case where the type of the physical object has not been identified (See at least Para [0040] “… In the embodiment, the coarse position and orientation of the target object 103 is repeatedly corrected by an iterative operation so that the three-dimensional geometric model corresponds to the two-dimensional image.”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the robot control apparatus of Yamada with the teachings of Suzuki and include the feature of the processor being configured to control the robot arm so that a state of the target object changes in a case where the type of the physical object has not been identified, thereby providing accuracy and efficiency during object manipulation (See at least Para [0129] “… Avoiding the viewpoint position and orientation in which the curved portion looks like an edge can improve the accuracy of position and orientation measurement.”).

Regarding Claim 8, modified Yamada teaches all the elements of claim 7. Although Yamada teaches … a case where the type of the physical object has not been identified (See at least Para [0105] “In the first exemplary embodiment, the shape of the target object 106 is recognized in consideration of only the static operation of the gripping unit (fingers). If the target object 106 is wrapped in a plastic bag or other cushioning materials (packaging materials), the end effector 105 in contact with the target object 106 is therefore not always able to fully compress the packaging materials. In such a case, the shape of the contents, i.e., the target object 106 itself is not always able to be measured.”, Para [0081] “…If the end effector 105 reaches the target position and the gripping unit is not detected to be in contact with the target object 106, a new target position is set to perform the operation again.”, Para [0036] “In the present exemplary embodiment, a change of the end effector 105 when the end effector 105 contacts a target object 106 is measured. The information processing apparatus 104 recognizes states of the target object 106 based on the measured change of the end effector 105. The states to be recognized include the gripping state of the target object 106 by the end effector 105, and the shape, position, and orientation of the target object 106.”), he does not explicitly spell out the robot system according to claim 7, wherein the processor is configured to control the robot arm so that an orientation of the target object changes in a case where the type of the physical object has not been identified. Suzuki teaches the robot system according to claim 7, wherein the processor is configured to control the robot arm so that an orientation of the target object changes in a case where the type of the physical object has not been identified (See at least Para [0040] “… In the embodiment, the coarse position and orientation of the target object 103 is repeatedly corrected by an iterative operation so that the three-dimensional geometric model corresponds to the two-dimensional image.”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the robot control apparatus of Yamada with the teachings of Suzuki and include the feature of the processor being configured to control the robot arm so that an orientation of the target object changes in a case where the type of the physical object has not been identified, thereby providing accuracy and efficiency during object manipulation (See at least Para [0129] “… Avoiding the viewpoint position and orientation in which the curved portion looks like an edge can improve the accuracy of position and orientation measurement.”).

Regarding Claim 9, modified Yamada teaches all the elements of claim 7. Yamada further teaches the robot system according to claim 7, wherein the processor is configured to control the robot arm so that a state of the packaging member changes in a case where the type of the physical object has not been identified (See at least Para [0106] “In contrast, according to the present exemplary embodiment, the external force applied to a target object 106 can be controlled to compress the packaging materials around the target object 106 while measuring the shape of the contents. With such a control, the shape of the target object 106, the contents, can be accurately measured to increase the possibility of continuing operation even if the target object 106 is wrapped with cushioning materials such as a plastic bag.”, Para [0113] “For example, if the target object 106 to be gripped is wrapped with semitransparent or nontransparent packaging materials, the shape of the object inside the packaging materials is difficult to measure by a vision sensor. In the present exemplary embodiment, whether the target object 106 can be gripped is then determined by compressing the packaging materials around the target object 106 by the end effector 105, measuring the shape of the object inside, and recognizing the gripping state.”).

Regarding Claim 10, modified Yamada teaches all the elements of claim 9. Yamada further teaches the robot system according to claim 9, wherein the processor is configured to control the robot arm so that swelling of the packaging member is suppressed in a case where the type of the physical object has not been identified (See at least Para [0106] “In contrast, according to the present exemplary embodiment, the external force applied to a target object 106 can be controlled to compress the packaging materials around the target object 106 while measuring the shape of the contents.
With such a control, the shape of the target object 106, the contents, can be accurately measured to increase the possibility of continuing operation even if the target object 106 is wrapped with cushioning materials such as a plastic bag.”, Para [0113] “For example, if the target object 106 to be gripped is wrapped with semitransparent or nontransparent packaging materials, the shape of the object inside the packaging materials is difficult to measure by a vision sensor. In the present exemplary embodiment, whether the target object 106 can be gripped is then determined by compressing the packaging materials around the target object 106 by the end effector 105 , measuring the shape of the object inside, and recognizing the gripping state.”). Claim(s) 11 is rejected under 35 U.S.C. 103 as being unpatentable over Yamada et al. (US2017282363A1) (Hereinafter Yamada) in view of Wang et al. (US 20190102873 A1) (Hereinafter Wang), Suzuki (US 20130158947 A1), and further in view of Diankov et al. (US 20210387333 A1) (Hereinafter Diankov). Regarding Claim 11, modified Yamada teaches all the elements of claim 9. Although Yamada teaches … a case where the type of the physical object has not been identified (See at least Para [0105] “In the first exemplary embodiment, the shape of the target object 106 is recognized in consideration of only the static operation of the gripping unit (fingers). If the target object 106 is wrapped in a plastic bag or other cushioning materials (packaging materials), the end effector 105 in contact with the target object 106 is therefore not always able to fully compress the packaging materials. In such a case, the shape of the contents, i.e., the target object 106 itself is not always able to be measured.”, Para [0081] “…If the end effector 105 reaches the target position and the gripping unit is not detected to be in contact with the target object 106, a new target position is set to perform the operation again.”, Para [0036] “In the present exemplary embodiment, a change of the end effector 105 when the end effector 105 contacts a target object 106 is measured. The information processing apparatus 104 recognizes states of the target object 106 based on the measured change of the end effector 105. The states to be recognized include the gripping state of the target object 106 by the end effector 105, and the shape, position, and orientation of the target object 106.”), he does not explicitly spell out the robot system according to claim 9, wherein the processor is configured to control the robot arm so that the packaging member is extended in a case where the type of the physical object has not been identified. Diankov teaches the robot system according to claim 9, wherein the processor is configured to control the robot arm so that the packaging member is extended in a case where the type of the physical object has not been identified (See at least Para [0149] “… For the notified grip pose illustrated in FIG. 23A, the target package 112 can extend into the second vacuum region 117b…). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the robot control apparatus of Yamada with the teachings of Diankov and include the feature of the processor being configured to control the robot arm so that the packaging member is extended in a case where the type of the physical object has not been identified, thereby providing accuracy and efficiency during object manipulation (See at least Para [0073] “… the robotic system 100 can efficiently implement a computer-learning process that can account for previously unknown or unexpected conditions (e.g., lighting conditions, unknown orientations, and/or stacking inconsistencies) and/or newly encountered packages. Accordingly, the robotic system 100 can reduce the failures resulting from “unknown” conditions/packages, associated human operator interventions, and/or associated task failures…”).

Claims 12 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Yamada et al. (US2017282363A1) (Hereinafter Yamada) in view of Wang et al. (US 20190102873 A1) (Hereinafter Wang), and further in view of Tanabe (US 20210176430 A1).

Regarding claim 12, modified Yamada teaches all the elements of claim 1. Although Yamada teaches … a case where the type of the physical object has not been identified (See at least Para [0105] “In the first exemplary embodiment, the shape of the target object 106 is recognized in consideration of only the static operation of the gripping unit (fingers). If the target object 106 is wrapped in a plastic bag or other cushioning materials (packaging materials), the end effector 105 in contact with the target object 106 is therefore not always able to fully compress the packaging materials. In such a case, the shape of the contents, i.e., the target object 106 itself is not always able to be measured.”; Para [0081] “…If the end effector 105 reaches the target position and the gripping unit is not detected to be in contact with the target object 106, a new target position is set to perform the operation again.”; Para [0036] “In the present exemplary embodiment, a change of the end effector 105 when the end effector 105 contacts a target object 106 is measured. The information processing apparatus 104 recognizes states of the target object 106 based on the measured change of the end effector 105. The states to be recognized include the gripping state of the target object 106 by the end effector 105, and the shape, position, and orientation of the target object 106.”), Yamada does not explicitly spell out the robot system according to claim 1, wherein the processor is configured to make a change to an environment different from an environment in which the image of the target object has been captured based on a learned model in which a coefficient has been decided in a supervised learning method in a case where the type of the physical object has not been identified.

Tanabe teaches the robot system according to claim 1, wherein the processor is configured to make a change to an environment different from an environment in which the image of the target object has been captured based on a learned model in which a coefficient has been decided in a supervised learning method in a case where the type of the physical object has not been identified (See at least Para [0138] “Note that in a case where additional supervised learning is performed, the CPU 501 transmits the captured image to the server SV via the communication section 507. The CPU 501 receives a result of processing performed by the neural network processor 408 of the server SV, using a high-performance learned model, from the server SV, as teacher data, via the communication section 507. Then, the CPU 501 stores a learned model newly generated by this additional learning in the storage unit 506. Specifically, the additional learning is performed in a state in which values of the learned coefficient parameters of the existing learned model are used as the initial values of the weight coefficients, the bias values, etc. Then, the CPU 501 proceeds to a step S1103.”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the robot control apparatus of Yamada with the teachings of Tanabe and include the feature of the processor being configured to make a change to an environment different from an environment in which the image of the target object has been captured based on a learned model in which a coefficient has been decided in a supervised learning method in a case where the type of the physical object has not been identified, thereby providing accuracy and efficiency during object manipulation.

Regarding claim 13, modified Yamada teaches all the elements of claim 12. However, Yamada does not explicitly spell out the robot system according to claim 12, wherein the processor is configured to change the learned model based on the environment that has changed and makes the change to the environment different from the environment in which the image of the target object has been captured based on the learned model after the change.

Tanabe teaches the robot system according to claim 12, wherein the processor is configured to change the learned model based on the environment that has changed and makes the change to the environment different from the environment in which the image of the target object has been captured based on the learned model after the change (See at least Para [0143] “In the step S1107, the CPU 501 transmits the learned coefficient parameters of the learned model associated with the high-priority object after the change to the image capturing apparatus C via the communication section 507, followed by terminating the present process.”; Para [0144]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the robot control apparatus of Yamada with the teachings of Tanabe and include the feature of the processor being configured to change the learned model based on the environment that has changed and make the change to the environment different from the environment in which the image of the target object has been captured based on the learned model after the change, thereby providing accuracy and efficiency during object manipulation.
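For context, the "additional learning" Tanabe describes, in which the learned coefficient parameters of the existing model serve as the initial values for a new round of supervised training on teacher data, is ordinary fine-tuning. Below is a minimal PyTorch-style sketch of that pattern; the model architecture, file names, and training loop are illustrative assumptions, not Tanabe's actual implementation.

```python
# Fine-tuning sketch: initialize from an existing learned model's coefficient
# parameters, then run additional supervised learning on newly captured images
# with teacher data (cf. Tanabe Para [0138]). All names are illustrative.
import torch
import torch.nn as nn

class TypeClassifier(nn.Module):
    """Hypothetical image classifier standing in for the learned model."""
    def __init__(self, num_types: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, num_types)

    def forward(self, x):
        return self.head(self.features(x))

model = TypeClassifier(num_types=10)
# The existing learned coefficients become the initial weight/bias values.
model.load_state_dict(torch.load("existing_model.pt"))

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def additional_learning_step(images: torch.Tensor,
                             teacher_labels: torch.Tensor) -> float:
    """One supervised step on captured images labeled with teacher data."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), teacher_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Store the newly generated learned model after additional learning.
torch.save(model.state_dict(), "updated_model.pt")
```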
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Yamada et al. (US2017282363A1) (Hereinafter Yamada) in view of Wang et al. (US 20190102873 A1) (Hereinafter Wang), and further in view of Wu et al. (US 20220114712 A1) (Hereinafter Wu).

Regarding claim 14, modified Yamada teaches all the elements of claim 1. Although Yamada teaches … a case where the type of the physical object has not been identified (See at least Para [0105] “In the first exemplary embodiment, the shape of the target object 106 is recognized in consideration of only the static operation of the gripping unit (fingers). If the target object 106 is wrapped in a plastic bag or other cushioning materials (packaging materials), the end effector 105 in contact with the target object 106 is therefore not always able to fully compress the packaging materials. In such a case, the shape of the contents, i.e., the target object 106 itself is not always able to be measured.”; Para [0081] “…If the end effector 105 reaches the target position and the gripping unit is not detected to be in contact with the target object 106, a new target position is set to perform the operation again.”; Para [0036] “In the present exemplary embodiment, a change of the end effector 105 when the end effector 105 contacts a target object 106 is measured. The information processing apparatus 104 recognizes states of the target object 106 based on the measured change of the end effector 105. The states to be recognized include the gripping state of the target object 106 by the end effector 105, and the shape, position, and orientation of the target object 106.”), Yamada does not explicitly spell out the robot system according to claim 1, wherein the processor is configured to perform first processing content in a case where the type of the physical object [has been identified] and performs second processing content different from the first processing content in a case where the type of the physical object has not been identified from an image obtained by capturing the target object after the first processing content is performed.

Wu teaches the robot system according to claim 1, wherein the processor is configured to perform first processing content in a case where the type of the physical object [has been identified] and performs second processing content different from the first processing content in a case where the type of the physical object has not been identified from an image obtained by capturing the target object after the first processing content is performed (See at least Para [0112] “… a capture image may be obtained by replacing an image of a target region corresponding to a capture target in the fused image with an image of a target region corresponding to the capture target in the target visible image, which can ensure that a dynamic range of the capture image is improved so that visible contents of the capture image are increased in visibility, and further can ensure that a resolution of a local target region in the capture image is not reduced so that the local target region of the capture image is clear and clean in visibility, thereby improving a quality of the capture image”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the robot control apparatus of Yamada with the teachings of Wu and include the feature of the processor being configured to perform second processing content different from the first processing content in a case where the type of the physical object has not been identified from an image obtained by capturing the target object after the first processing content is performed, thereby providing accuracy and efficiency during object manipulation (See at least Para [0071] “… According to the systems and methods of the present disclosure, a target image of a target region may be generated based on information associated with an ROI in a first image (e.g., a visible image) and a fused image generated based on the first image and a second image (e.g., an infrared image), during which the fused image may be transmitted via a blanking bandwidth, which can effectively utilize both the advantages of the visible image and the infrared image and does not affect the image transmission process, thereby improving image quality and improving image processing efficiency”).
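In plainer terms, Wu's mechanism patches the target region of a fused (e.g., visible plus infrared) frame with the corresponding region of the visible frame, and claim 14's flow falls back to different processing when the object type still cannot be identified after the first attempt. The sketch below illustrates both ideas; the function names, ROI convention, and control flow are assumptions for illustration, not code from either reference.

```python
# Region replacement per Wu Para [0112]: overwrite the target region of the
# fused image with the same region from the visible image, keeping full local
# resolution inside the ROI and the fused frame's improved dynamic range
# everywhere else. All names here are illustrative.
import numpy as np

def compose_capture_image(fused: np.ndarray, visible: np.ndarray,
                          roi: tuple[int, int, int, int]) -> np.ndarray:
    """Return the fused frame with the ROI patched from the visible frame."""
    x, y, w, h = roi
    capture = fused.copy()
    capture[y:y + h, x:x + w] = visible[y:y + h, x:x + w]
    return capture

def process_target(capture_fn, recognize_fn, first_step, second_step):
    """Claim-14-style flow: run first processing content, re-capture the
    target, and run different second processing content if the object type
    is still not identified."""
    first_step()
    object_type = recognize_fn(capture_fn())
    if object_type is None:  # type not identified after first processing
        second_step()
    return object_type
```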
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Hsieh et al. (US 20230127218 A1) teaches a method including receiving, from a camera, image data representing an object in an environment and determining, based on the image data, a vertical position within the image data of a bottom of the object.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAHEDA HOQUE, whose telephone number is (571) 270-5310. The examiner can normally be reached Monday-Friday, 8:00 am-5:00 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ramon Mercado, can be reached at 571-270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHAHEDA HOQUE/
Examiner, Art Unit 3658

/Ramon A. Mercado/
Supervisory Patent Examiner, Art Unit 3658

Prosecution Timeline

Sep 17, 2024
Application Filed
Jan 15, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12569992
AUTOMATIC DETERMINATION OF ROBOT SETTLING STATES
2y 5m to grant · Granted Mar 10, 2026
Patent 12539597
ROBOT SYSTEM, AND CONTROL METHOD FOR SAME
2y 5m to grant · Granted Feb 03, 2026
Patent 12514143
AGRICULTURAL MACHINE, AGRICULTURAL WORK ASSISTANCE APPARATUS, AND AGRICULTURAL WORK ASSISTANCE SYSTEM
2y 5m to grant · Granted Jan 06, 2026
Patent 12485538
METHOD AND SYSTEM FOR DETERMINING A WORKPIECE LOADING LOCATION IN A CNC MACHINE WITH A ROBOTIC ARM
2y 5m to grant · Granted Dec 02, 2025
Patent 12479107
METHOD AND AN ASSEMBLY UNIT FOR PERFORMING ASSEMBLING OPERATIONS
2y 5m to grant · Granted Nov 25, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

1-2
Expected OA Rounds
43%
Grant Probability
81%
With Interview (+37.9%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 58 resolved cases by this examiner. Grant probability derived from career allow rate.
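The headline figures above are reproducible from the career counts, assuming the interview lift is applied as a simple additive percentage-point adjustment (the page's exact model is not disclosed, so the formula below is an assumption):

```python
# Reconstructing the projection figures from the examiner's career data.
# The additive-lift model is an assumption, not the page's disclosed method.
granted, resolved = 25, 58
base = granted / resolved               # 0.431 -> shown as 43%
interview_lift = 0.379                  # shown as +37.9%
with_interview = base + interview_lift  # 0.810 -> shown as 81%
print(f"base {base:.0%}, with interview {with_interview:.0%}")
# -> base 43%, with interview 81%
```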
