Prosecution Insights
Last updated: April 19, 2026
Application No. 18/648,485

MOVABLE OBJECT, MOVABLE OBJECT IMAGING SYSTEM, AND MOVABLE OBJECT IMAGING METHOD

Non-Final OA: §102, §103
Filed: Apr 29, 2024
Examiner: ALAVI, AMIR
Art Unit: 2668
Tech Center: 2600 — Communications
Assignee: Fujifilm Corporation
OA Round: 1 (Non-Final)
Grant Probability: 94% (Favorable)
OA Rounds: 1-2
To Grant: 2y 5m
With Interview: 97%

Examiner Intelligence

Grants 94% of resolved applications, above average.

Career Allow Rate: 94% (1083 granted / 1156 resolved; +31.7% vs TC avg)
Interview Lift: +3.6% across resolved cases with an interview (a minimal lift of roughly +4%)
Typical Timeline: 2y 5m average prosecution
Career History: 1179 total applications across all art units (1156 resolved, 23 currently pending)
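To make the rounding behind these cards explicit, here is a minimal sketch of how the figures relate to one another. The variable names are ours, and treating the interview lift as additive in percentage points is an assumption rather than anything the page states.

```python
# Minimal sketch of the examiner-card arithmetic above.
# Assumptions: the dashboard rounds to whole percentage points, and the
# interview lift is additive in percentage points on top of the base rate.

GRANTED = 1083             # from "1083 granted / 1156 resolved"
RESOLVED = 1156
TOTAL_APPLICATIONS = 1179  # across all art units
INTERVIEW_LIFT_PP = 3.6    # reported lift, in percentage points

allow_rate = 100 * GRANTED / RESOLVED
print(f"Career allow rate: {allow_rate:.1f}%")   # 93.7%, displayed as 94%

pending = TOTAL_APPLICATIONS - RESOLVED
print(f"Currently pending: {pending}")           # 23, matching the card

with_interview = allow_rate + INTERVIEW_LIFT_PP
print(f"With interview: {with_interview:.1f}%")  # 97.3%, displayed as 97%
```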

Statute-Specific Performance

§101: 23.0% (-17.0% vs TC avg)
§103: 20.2% (-19.8% vs TC avg)
§102: 19.5% (-20.5% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 1156 resolved cases.
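One consistency check on these rows: each statute's rate minus its (negative) delta lands exactly on 40.0%, suggesting the Tech Center average estimate behind the deltas is a flat 40% per statute. The sketch below reconstructs that implied average; the flat-40% reading is our inference from the arithmetic, not a figure the page reports.

```python
# Reconstruct the implied Tech Center (TC) averages from the table above.
# Assumption: "vs TC avg" is the signed difference (examiner rate minus
# TC average), so TC average = rate - delta.
rates = {
    "§101": (23.0, -17.0),
    "§103": (20.2, -19.8),
    "§102": (19.5, -20.5),
    "§112": (12.9, -27.1),
}

for statute, (rate, delta) in rates.items():
    tc_avg = rate - delta  # e.g. 23.0 - (-17.0) = 40.0
    print(f"{statute}: examiner {rate:.1f}%, implied TC average {tc_avg:.1f}%")

# All four rows imply a 40.0% TC average, consistent with a single flat
# estimate line in the original chart.
```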

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-2, 4-6 and 8-17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Maruyama (US 2018/0215044).

Regarding claim 1, Maruyama recites, a movable object body (Please note, figure 1, a Robot); an imaging apparatus that captures an image of an object (Please note, figure 1 in correlation to paragraph 0052. As indicated the first imaging unit 21 is a camera); a processor (Please note, Abstract of the invention. As indicated an image processing device includes a processor) that acquires imaging position information and imaging posture information of the imaging apparatus in a case where the image of the object is captured by the imaging apparatus (Please note, paragraph 0074. As indicated the robot control device 30 operates the robot 20 so that a position and a posture of the first imaging unit 21 coincide with a predetermined imaging position and imaging posture.); and a storage device that stores the imaging position information and the imaging posture information. (Please note, paragraph 0102. As indicated the robot control unit 363 reads imaging position/posture information stored in advance in the storage unit 32 from the storage unit 32.)

Regarding claim 2, Maruyama recites, wherein the processor is configured to receive an imaging instruction based on the imaging position information and the imaging posture information. (Please note, paragraph 0102. As indicated the robot control unit 363 moves the first imaging unit 21 by operating the robot 20, and causes the position and the posture of the first imaging unit 21 to coincide with the imaging position and the imaging posture which are indicated by the read imaging position/posture information.)

Regarding claim 4, Maruyama recites, wherein the imaging instruction includes an imaging parameter for the imaging apparatus. (Please note, paragraph 0073. As indicated the robot control device 30 performs calibration using a calibration plate before causing the robot 20 to carry out the predetermined work. The calibration is performed in order to calibrate an external parameter and an internal parameter of the first imaging unit 21. Specifically, the calibration is performed in order to associate a position on the image captured by the first imaging unit 21 and a position in a robot coordinate system RC with each other.
That is, when causing the first imaging unit 21 to perform imaging, the robot control device 30 causes the first imaging unit 21 to perform the imaging inside a region whose parameter is adjusted by performing the calibration.)

Regarding claim 5, Maruyama recites, wherein the processor is configured to generate an imaging route for the received imaging instruction. (Please note, paragraph 0034. As indicated figure 4 is a flowchart illustrating an example of a flow in a calibration process performed by the robot control device.)

Regarding claim 6, Maruyama recites, wherein the processor is configured to control the imaging apparatus on the basis of the imaging instruction. (Please note, paragraph 0066. As indicated the robot control device 30 is a controller which controls (operates) the robot 20. For example, the robot control device 30 generates a control signal based on an operation program stored in advance. The robot control device 30 outputs the generated control signal to the robot 20, and causes the robot 20 to carry out predetermined work.)

Regarding claim 8, Maruyama recites, wherein the movable object body is operated remotely or autonomously. (Please note, paragraph 0064. As indicated a configuration may be adopted in which the fourth imaging unit 24 is connected to the robot control device 30 by using the wireless communication performed in accordance with communication standard such as Wi-Fi (registered trademark).)

Regarding claim 9, Maruyama recites, wherein the movable object body is an unmanned flying object or a mobile robot. (Please note, paragraph 0066. As indicated the robot control device 30 is a controller which controls (operates) the robot 20. For example, the robot control device 30 generates a control signal based on an operation program stored in advance. The robot control device 30 outputs the generated control signal to the robot 20, and causes the robot 20 to carry out predetermined work.)

Regarding claim 10, Maruyama recites, wherein the imaging apparatus acquires a two-dimensional color image. (Please note, paragraph 0009. As indicated the image processing device may adopt a configuration in which the image captured by the imaging unit is a two-dimensional image.)

Regarding claim 11, Maruyama recites, wherein the imaging apparatus acquires three-dimensional data. (Please note, paragraph 0078. As indicated the posture is represented by a direction in the robot coordinate system RC of each coordinate axis in a three-dimensional local coordinate system associated with the center of gravity of the object O. Alternatively, a configuration may be adopted in which the posture is represented by other directions associated with the object O. The robot coordinate system RC is the robot coordinate system of the robot 20. The image processing device 40 calculates the position of the object O, based on the image. For example, the position is represented by a position in the robot coordinate system RC of the origin in the three-dimensional local coordinate system.)

Regarding claim 12, Maruyama recites, wherein the movable object is capable of communicating with an information processing device (Please note, paragraph 0048. As indicated the first end effector E1 is connected to the robot control device 30 via a cable so as to be capable of communicating therewith. In this manner, the first end effector E1 performs an operation based on a control signal acquired from the robot control device 30.
For example, wired communication via the cable is performed in accordance with standards such as Ethernet (registered trademark) and a universal serial bus (USB). A configuration may be adopted in which the first end effector E1 is connected to the robot control device 30 by using wireless communication performed in accordance with communication standards such as Wi-Fi (registered trademark).), and the processor is configured to cause the movable object body to move to an imaging position based on the imaging position information and the imaging posture information stored in the storage device in a case where the processor receives an imaging instruction from the information processing device, and capturing is performed by the imaging apparatus. (Please note, paragraph 0102. As indicated the robot control unit 363 reads imaging position/posture information stored in advance in the storage unit 32 from the storage unit 32. The imaging position/posture information indicates the above-described imaging position and imaging posture. The robot control unit 363 moves the first imaging unit 21 by operating the robot 20, and causes the position and the posture of the first imaging unit 21 to coincide with the imaging position and the imaging posture which are indicated by the read imaging position/posture information.)

Regarding claim 13, Maruyama recites, wherein the processor is configured to receive from an outside of the movable object, the imaging instruction based on the imaging position information and the imaging posture information which have already been stored in the storage device. (Please note, paragraph 0102. As indicated the robot control unit 363 may have a configuration in which the imaging posture is stored in advance. In this case, in Step S110, the robot control unit 363 reads the imaging position information stored in advance in the storage unit 32 from the storage unit 32. The imaging position information indicates the above-described imaging position.)

Regarding claim 14, Maruyama recites, wherein the imaging instruction includes identification information for specifying a captured image for which re-imaging is necessary (Please note, paragraph 0068. As indicated the robot 20 partially or entirely causes the first imaging unit 21 to the fourth imaging unit 24 to image an object O disposed inside a work region of the robot 20. Hereinafter, as an example, a case where the robot 20 causes the first imaging unit 21 to image the object O will be described. The robot 20 may be configured to cause an imaging unit separate from the robot 20 to image the object O. In this case, the robot system 1 includes the imaging unit. The imaging unit is installed at a position where the object O can be imaged.), and the processor is configured to acquire the imaging position information and the imaging posture information when the captured image was captured, from the storage device based on the identification information. (Please note, paragraph 0102. As indicated the robot control unit 363 reads imaging position/posture information stored in advance in the storage unit 32 from the storage unit 32. The imaging position/posture information indicates the above-described imaging position and imaging posture. The robot control unit 363 moves the first imaging unit 21 by operating the robot 20, and causes the position and the posture of the first imaging unit 21 to coincide with the imaging position and the imaging posture which are indicated by the read imaging position/posture information.)
Regarding claims 15-16, an analysis similar to that presented for claim 1, above, is applicable.

Regarding claim 17, Maruyama recites, a step of re-imaging the object on the basis of the imaging position information and the imaging posture information. (Please note, paragraph 0112. As indicated the robot control unit 363 reads the imaging position/posture information stored in advance in the storage unit 32 from the storage unit 32. The robot control unit 363 moves the first imaging unit 21 by operating the robot 20, and causes the position and the posture of the first imaging unit 21 to coincide with the imaging position and the imaging posture which are indicated by the read imaging position/posture information (Step S210). A configuration may be adopted as follows. The robot control unit 363 does not read the imaging position/posture information from the storage unit 32 in Step S210, and causes the position and posture of the first imaging unit 21 to coincide with the imaging position and the imaging posture which are indicated by the imaging position/posture information read from the storage unit 32 in Step S110.)

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Maruyama (US 2018/0215044), in view of Horita (US 2019/0222751).

Regarding claim 3, Maruyama recites, a movable object body (Please note, figure 1, a Robot); an imaging apparatus that captures an image of an object (Please note, figure 1 in correlation to paragraph 0052. As indicated the first imaging unit 21 is a camera); a processor (Please note, Abstract of the invention. As indicated an image processing device includes a processor) that acquires imaging position information and imaging posture information of the imaging apparatus in a case where the image of the object is captured by the imaging apparatus (Please note, paragraph 0074.
As indicated the robot control device 30 operates the robot 20 so that a position and a posture of the first imaging unit 21 coincide with a predetermined imaging position and imaging posture.); and a storage device that stores the imaging position information and the imaging posture information. (Please note, paragraph 0102. As indicated the robot control unit 363 reads imaging position/posture information stored in advance in the storage unit 32 from the storage unit 32.)

Maruyama does not expressly teach, wherein the imaging instruction includes a correction amount for at least one of the imaging position information or the imaging posture information.

Horita teaches, wherein the imaging instruction includes a correction amount for at least one of the imaging position information or the imaging posture information. (Please note, paragraph 0137. As indicated the imaging plan adjustment unit 419 adjusts the deck slab imaging positions or the steel member imaging positions and postures in the imaging plan generated by the imaging plan generation unit 407 based on an adjustment command. The adjustment command is received by, for example, an adjustment command reception unit (not shown), and is a command for adjusting the deck slab imaging positions, the deck slab imaging postures, the steel member imaging positions, or the steel member imaging postures. The adjustment command reception unit is implemented by, for example, the input unit 330.)

Maruyama and Horita are combinable because they are from the same field of endeavor. At the time before the effective filing date, it would have been obvious to a person of ordinary skill in the art to utilize this correction operation of Horita in Maruyama's invention. The suggestion/motivation for doing so would have been as indicated in paragraph 0136: "the imaging plan correction unit 417 optimizes an imaging sequence such that a total imaging time or a total moving distance becomes the shortest". Therefore, it would have been obvious to combine Horita with Maruyama to obtain the invention as specified in claim 3.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Maruyama (US 2018/0215044), in view of Liu (CN 110567458 A).

Regarding claim 7, Maruyama recites, a movable object body (Please note, figure 1, a Robot); an imaging apparatus that captures an image of an object (Please note, figure 1 in correlation to paragraph 0052. As indicated the first imaging unit 21 is a camera); a processor (Please note, Abstract of the invention. As indicated an image processing device includes a processor) that acquires imaging position information and imaging posture information of the imaging apparatus in a case where the image of the object is captured by the imaging apparatus (Please note, paragraph 0074. As indicated the robot control device 30 operates the robot 20 so that a position and a posture of the first imaging unit 21 coincide with a predetermined imaging position and imaging posture.); and a storage device that stores the imaging position information and the imaging posture information. (Please note, paragraph 0102. As indicated the robot control unit 363 reads imaging position/posture information stored in advance in the storage unit 32 from the storage unit 32.)
Maruyama does not expressly teach, wherein the processor is configured to acquire the imaging position information from a positioning sensor provided in the movable object body, and the imaging posture information from an inertia measurement sensor provided in the movable object body.

Liu teaches, wherein the processor is configured to acquire the imaging position information from a positioning sensor provided in the movable object body, and the imaging posture information from an inertia measurement sensor provided in the movable object body. (Please note, page 7, next to the last paragraph. As indicated wherein, multi-sensor data can be by the following robot in at least two sensors collecting inertia measurement component sensor, a GPS positioning sensor, a vision sensor, a laser radar sensor, ultra-wideband (UWB) sensor and an encoder and the like.)

Maruyama and Liu are combinable because they are from the same field of endeavor. At the time before the effective filing date, it would have been obvious to a person of ordinary skill in the art to utilize this sensor operation of Liu in Maruyama's invention. The suggestion/motivation for doing so would have been as indicated on page 7, last paragraph: "selecting the effective sensor data for positioning." Therefore, it would have been obvious to combine Liu with Maruyama to obtain the invention as specified in claim 7.

Examiner's Note

The examiner cites particular figures, paragraphs, columns and line numbers in the references as applied to the claims for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMIR ALAVI, whose telephone number is (571) 272-7386. The examiner can normally be reached M-F from 8:00-4:30.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vu Le, can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMIR ALAVI/
Primary Examiner, Art Unit 2668
Saturday, February 7, 2026

Prosecution Timeline

Apr 29, 2024: Application Filed
Feb 07, 2026: Non-Final Rejection — §102, §103
Apr 06, 2026: Applicant Interview (Telephonic)
Apr 06, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597232: SYSTEM FOR LEARNING NEW VISUAL INSPECTION TASKS USING A FEW-SHOT META-LEARNING METHOD (granted Apr 07, 2026; 2y 5m to grant)
Patent 12573189: PROCESSING METHOD AND PROCESSING DEVICE USING SAME (granted Mar 10, 2026; 2y 5m to grant)
Patent 12567238: GENERATING A DATA STRUCTURE FOR SPECIFYING VISUAL DATA SETS (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561950: AI System and Method for Automatic Analog Gauge Reading (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561774: SYSTEM AND METHOD FOR REAL-TIME TONE-MAPPING (granted Feb 24, 2026; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 94%
With Interview: 97% (+3.6%)
Median Time to Grant: 2y 5m
PTA Risk: Low

Based on 1156 resolved cases by this examiner. Grant probability is derived from the career allow rate.
