Prosecution Insights
Last updated: April 19, 2026
Application No. 18/783,177

IMAGING PLANNING DEVICE AND IMAGING PLANNING METHOD

Non-Final OA (§102, §103)
Filed: Jul 24, 2024
Examiner: HE, YINGCHUN
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: Mitsubishi Electric Corporation
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 82% (above average) • 529 granted / 644 resolved • +20.1% vs TC avg
Interview Lift: +14.4% (moderate) among resolved cases with interview
Typical Timeline: 2y 5m average prosecution • 27 currently pending
Career History: 671 total applications across all art units

Statute-Specific Performance

§101: 8.4% (-31.6% vs TC avg)
§102: 5.4% (-34.6% vs TC avg)
§103: 54.0% (+14.0% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 644 resolved cases

Office Action

Rejections: §102, §103
DETAILED ACTION

Note on formatting in the following document:
1. Text in bold italics is quoted, directly or conceptually, from the claims/descriptions disclosed in the instant application.
2. Text in regular italics is quoted directly from a cited reference or from Applicant's arguments.
3. Underlined text is added by the Examiner for emphasis.
4. Texts with […]
5. The acronym "PHOSITA" stands for "Person Having Ordinary Skill In The Art".

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1 and 4 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Fan et al. ("Automated View and Path Planning for Scalable Multi-Object 3D Scanning", ACM Transactions on Graphics, Vol. 35, No. 6, Article 239, November 2016, pages 239:1-239:13 (13 pages total)).

Regarding Claim 1, Fan discloses an imaging planning device (Fig. 1(a): System Layout) comprising processing circuitry to apply a virtual solid figure having a predetermined shape to a virtual model object imitating a shape of a solid object for which a three-dimensional model is to be generated (p.239:2 right col., last paragraph: "Our system, in contrast, optimizes the number and position of views using a low-resolution overview model acquired with a set of webcams." p.239:4 right col., first paragraph: "Our view planning approach (discussed below) is based upon selecting a subset of candidate views that provide sufficient coverage of the 3D surface of an object. To generate these candidate views, we first fit an elliptical cylinder to the approximate object model, then dilate the ellipse by several different amounts corresponding approximately to the scanner's 'standoff' (i.e., the distance between the camera position and points ranging from the front to the back of the scanner's working volume)." The virtual elliptical cylinder is interpreted as the virtual solid figure having a predetermined shape applied to the virtual model object imitating the shape of the solid object. See p.239:2 right col., second-to-last paragraph: "Capturing from multiple view points is necessary for a scanner to acquire a complete and high-fidelity 3D model of an object.");

to perform repetition of arrangement of a position of at least one virtual camera on a basis of a position on a surface of the virtual solid figure (p.239:6 right col., last three lines, to p.239:7 left col., first paragraph: "Sequential greedy optimization. An intuitive way of optimizing our objective function is using the classic greedy approach. In fact, there are inapproximability results [Feige 1998] showing that the sequential greedy approach is the best possible polynomial-time approximation algorithm for set cover. In our scenario, we begin with V* = [equation image not reproduced]; and iteratively add the view that yields the largest increase in the objective function." Also see p.239:4 right col., section "Candidate scanner views" [figure image not reproduced]);

imaging of the virtual model object by the at least one virtual camera at a position after the arrangement (p.239:4 right col., last paragraph: "Given an approximate object model provided by scene exploration, we begin by defining a view quality function that measures how well a 3D point on the object surface p is 'seen' by a single scanner view v");

and an increase in the number of the at least one virtual camera (p.239:7 left col., first paragraph: "In our scenario, we begin with V* = [equation image not reproduced]; and iteratively add the view that yields the largest increase in the objective function");

to determine the number and positions of the at least one virtual camera required to acquire a plurality of virtual images regarding the virtual model object required to generate a three-dimensional model of the virtual model object obtained by the repetition (p.239:7 left col., first paragraph [equation image not reproduced]);

and to plan, on a basis of the number and the positions of the at least one virtual camera, the number and positions of cameras required to acquire a plurality of images regarding the solid object that are to be required to generate the three-dimensional model of the solid object (p.239:8 left col., second paragraph: "We propose a novel positioning system that is designed to support efficient 3D acquisition of multiple objects. Motion of the system is calibrated so that the scanner is able to arrive at desired poses based on the view planning results." p.239:3 right col., last paragraph: "The objects are placed on the scanning platform, which is covered in black cloth for ease of object segmentation. Four static, calibrated webcams positioned around the platform capture images of the scene from above." Also see p.239:2 right col., last paragraph: "Our system, in contrast, optimizes the number and position of views using a low-resolution overview model acquired with a set of webcams." The optimization is to be interpreted as the planning step).

Regarding Claim 4, Claim 4 is similar to Claim 1 except that it is drafted in method format.
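The "sequential greedy optimization" Fan is quoted for above is the classic greedy set-cover heuristic: start with an empty selected set V* and repeatedly add the candidate view with the largest marginal gain. A minimal Python sketch with invented toy data (Fan's real objective weighs view quality, not raw point coverage as here):

```python
def greedy_view_selection(candidate_views, surface_points, target_coverage=1.0):
    """candidate_views: dict mapping view id -> set of surface points it covers."""
    selected = []          # V*, initially empty
    covered = set()
    target = target_coverage * len(surface_points)
    while len(covered) < target:
        # Add the view whose inclusion covers the most still-uncovered points.
        best = max(candidate_views, key=lambda v: len(candidate_views[v] - covered))
        if not candidate_views[best] - covered:
            break          # no remaining view improves coverage
        selected.append(best)
        covered |= candidate_views[best]
    return selected, covered

# Hypothetical example: 6 surface points seen by 4 candidate views.
views = {
    "v1": {1, 2, 3},
    "v2": {3, 4},
    "v3": {4, 5, 6},
    "v4": {1, 6},
}
chosen, covered = greedy_view_selection(views, {1, 2, 3, 4, 5, 6})
# chosen == ["v1", "v3"]: two views suffice to cover all six points.
```

As the Feige result cited in the quotation notes, this greedy loop is essentially the best polynomial-time approximation available for set cover, which is why view-planning systems adopt it.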
Therefore, the same reasons for rejection applied to Claim 1 also apply to Claim 4.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Fan et al. ("Automated View and Path Planning for Scalable Multi-Object 3D Scanning", ACM Transactions on Graphics, Vol. 35, No. 6, Article 239, November 2016, pages 239:1-239:13 (13 pages total)) as applied to Claim 1 above, and further in view of Vasquez-Gomez et al. ("View Planning for 3D Object Reconstruction", The 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, Oct. 11-15, 2009).

Regarding Claim 2, Fan fails to disclose wherein the processing circuitry further generates the virtual solid figure by combining at least one virtual part out of a plurality of virtual parts that can be used to form the virtual solid figure. However, Vasquez-Gomez, in the same field of endeavor, discloses generating the virtual solid figure by combining at least one virtual part out of a plurality of virtual parts that can be used to form the virtual solid figure (p.4016, Section III, and especially Algorithm 1 [algorithm image not reproduced]: Vasquez-Gomez teaches combining multiple range images to form a virtual solid figure that best represents the solid 3D object model).
Therefore, it would have been obvious to a PHOSITA before the effective filing date to incorporate the teaching of Vasquez-Gomez into that of Fan, and to include the limitation wherein the processing circuitry further generates the virtual solid figure by combining at least one virtual part out of a plurality of virtual parts that can be used to form the virtual solid figure, in order to allow users to evaluate how good a view is based on area percentage, quality, and navigation distance, as suggested by Vasquez-Gomez (p.4015 right col., lines 6-7).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Fan et al. ("Automated View and Path Planning for Scalable Multi-Object 3D Scanning", ACM Transactions on Graphics, Vol. 35, No. 6, Article 239, November 2016, pages 239:1-239:13 (13 pages total)) as applied to Claim 1 above, and further in view of Sun et al. ("Learning View Selection for 3D Scenes", 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)).

Regarding Claim 3, Fan discloses optimizing the number and position of views using a low-resolution overview model acquired with a set of webcams (p.239:2 right col., last paragraph). But Fan fails to disclose that the optimization is based on a binary search, and therefore fails to disclose the limitation wherein the processing circuitry further sets the number of cameras determined on a basis of a minimum number of the cameras to be prepared and a maximum number of the cameras that can be prepared as an initial value of the number of the at least one virtual camera, instead of the increase in the number of the at least one virtual camera, and then increases or decreases the number of the at least one virtual camera on a basis of binary search. However, Sun, in the same field of endeavor, discloses determining an optimal number of cameras by performing a binary search (p.14465 left col., lines 9-12: "We determine the optimal number of cameras by performing a binary search, i.e., starting from a sufficiently large value for n, and find the smallest n so that the coverage ratio is above 90%."). A PHOSITA before the effective filing date of the claimed invention would have known that binary search determines a number by searching between a minimum and a maximum value. Therefore, it would have been obvious to a PHOSITA before the effective filing date to incorporate the teaching of Sun into that of Fan, and to include the limitation of Claim 3 recited above, in order to optimize the camera pose as suggested by Sun (p.14462 left col., lines 1-4).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Chen ("Binary Search Applications", downloaded from https://activities.tjhsst.edu/sct/lectures/1112/binary102111.pdf, Oct. 2011) teaches: "Binary Search is a divide-and-conquer algorithm. At each iteration, we divide the list in half and search from only half of the list for future iterations. How do we accomplish this? First, we must sort the list of N integers. Then, at each step, we check the median of the N integers (the number with the middle index). If this number is greater than the query, then we know to only search in the left half of the list. Else, we search in the right half." (p.1, second paragraph).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YINGCHUN HE, whose telephone number is (571) 270-7218.
The examiner can normally be reached M-F, 8:00-5:00 MT. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xiao M Wu, can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/YINGCHUN HE/
Primary Examiner, Art Unit 2613
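The binary search that the Office Action above cites from Sun, and that the Chen reference explains, can be sketched as follows: find the smallest camera count n in [n_min, n_max] whose coverage ratio meets the 90% threshold from the Sun quotation. The coverage function below is a hypothetical monotone stand-in for the real (expensive) coverage evaluation:

```python
def smallest_sufficient_n(coverage, n_min, n_max, threshold=0.90):
    """Return the smallest n in [n_min, n_max] with coverage(n) >= threshold,
    assuming coverage is non-decreasing in n; None if no n suffices."""
    lo, hi = n_min, n_max
    answer = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if coverage(mid) >= threshold:
            answer = mid      # mid suffices; try fewer cameras
            hi = mid - 1
        else:
            lo = mid + 1      # mid falls short; need more cameras
    return answer

# Toy monotone coverage model: each extra camera covers 30% of what remains.
coverage = lambda n: 1 - 0.7 ** n
print(smallest_sufficient_n(coverage, 1, 32))  # → 7
```

Because each iteration halves the interval, the search evaluates coverage only O(log(n_max - n_min)) times, which is the point of preferring it over incrementing the camera count one by one.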

Prosecution Timeline

Jul 24, 2024
Application Filed
Jan 02, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602886
LOW LATENCY HAND-TRACKING IN AUGMENTED REALITY SYSTEMS
2y 5m to grant • Granted Apr 14, 2026
Patent 12588711
METHOD AND APPARATUS FOR OUTPUTTING IMAGE FOR VIRTUAL REALITY OR AUGMENTED REALITY
2y 5m to grant • Granted Mar 31, 2026
Patent 12586247
IMAGE DISTORTION CALIBRATION DEVICE, DISPLAY DEVICE AND DISTORTION CALIBRATION METHOD
2y 5m to grant • Granted Mar 24, 2026
Patent 12586491
Display Device and Method for Driving the Same
2y 5m to grant • Granted Mar 24, 2026
Patent 12579949
IMAGE PROCESSING APPARATUS
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 96% (+14.4%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 644 resolved cases by this examiner. Grant probability derived from career allow rate.
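The 96% "with interview" figure appears consistent with simply adding the +14.4% interview lift to the 82% base grant probability and rounding; the additive model below is an assumption, not something the page states:

```python
# Assumption (hypothetical model): "with interview" = base rate + lift, capped at 100%.
base = 0.82            # career allow rate shown above
lift = 0.144           # interview lift shown above
with_interview = min(base + lift, 1.0)
print(round(with_interview * 100))  # → 96
```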
