Prosecution Insights
Last updated: April 19, 2026
Application No. 18/561,176

INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND PROGRAM

Status: Non-Final OA (§103)
Filed: Nov 15, 2023
Examiner: MENDEZ MUNIZ, DYLAN JOHN
Art Unit: 2675
Tech Center: 2600 — Communications
Assignee: Omron Corporation
OA Round: 1 (Non-Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (15 granted / 18 resolved; +21.3% vs TC avg) — grants above average
Interview Lift: +25.0% — allow rate with vs. without an interview, among resolved cases with an interview
Avg Prosecution: 2y 11m typical timeline; 15 applications currently pending
Total Applications: 33 across all art units (career history)
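
The headline tiles above reduce to simple ratios over the examiner's resolved cases. Here is a minimal Python sketch for sanity-checking them; the record layout and field names are illustrative assumptions, not this tool's actual schema:

```python
# Hypothetical per-case records; only the two fields the stats above need.
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # did the application issue as a patent?
    had_interview: bool  # was an examiner interview held?

def allow_rate(cases):
    """Fraction of resolved cases that were granted."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    """Allow rate with an interview minus allow rate without one."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# 15 granted out of 18 resolved reproduces the 83% career allow rate.
# The interview split here is made up, so the lift it yields is not the
# +25.0% shown above (that figure comes from the real docket data).
cases = [ResolvedCase(granted=i < 15, had_interview=i % 2 == 0) for i in range(18)]
print(f"career allow rate: {allow_rate(cases):.0%}")    # -> 83%
print(f"interview lift:    {interview_lift(cases):+.1%}")
```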

Statute-Specific Performance

§101: 16.3% (-23.7% vs TC avg)
§103: 44.8% (+4.8% vs TC avg)
§102: 17.7% (-22.3% vs TC avg)
§112: 21.3% (-18.7% vs TC avg)
Deltas shown vs. Tech Center average estimate • Based on career data from 18 resolved cases
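
A quick consistency check on the deltas above: each "vs TC avg" figure is the examiner's statute-specific rate minus the Tech Center average estimate, so the implied averages can be recovered by subtraction. The values below are the dashboard's own, not pulled from USPTO data:

```python
# Examiner's statute-specific rates and their deltas vs the TC average,
# copied from the dashboard above.
examiner_rate = {"§101": 0.163, "§103": 0.448, "§102": 0.177, "§112": 0.213}
delta_vs_tc   = {"§101": -0.237, "§103": 0.048, "§102": -0.223, "§112": -0.187}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"{statute}: implied TC average = {tc_avg:.1%}")
# Every statute backs out to 40.0%.
```

All four back out to the same 40.0%, which suggests the Tech Center average shown is a single flat estimate rather than a per-statute figure.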

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) were filed on 11/15/2023 and 05/22/2025. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “measurement apparatus configured to…”, “imaging controller configured to…”, “data generator configured to…”, “range image sensor configured to…”, “data generator configured to…”, and “determiner configured to…” in claim 1; “a path generator configured to…” in claim 4; “an obtainer configured to…” in claim 6; and “a controller configured to…” in claim 8.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5, 7, 11 and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Hepp et al., hereafter Hepp (Hepp, Benjamin, Matthias Nießner, and Otmar Hilliges, “Plan3D: Viewpoint and trajectory optimization for aerial multi-view stereo reconstruction,” ACM Transactions on Graphics (TOG) 38.1 (2018): 1-17), in view of Martins et al., hereafter Martins (Martins, Fernando António Rodrigues, et al., “Automated 3D surface scanning based on CAD model,” Mechatronics 15.7 (2005): 837-857).

As per claim 1, Hepp teaches “an information processing system for a measurement apparatus configured to perform measurement of an object three-dimensionally using a plurality of range images obtained at different imaging positions by a… image sensor configured to obtain a range image using a principle of triangulation, the information processing system being a system for determining imaging positions at which M range images that should be used in the measurement of the object are captured, the information processing system comprising:” (See page 2, fig. 1 and page 5, fig. 2, which show an object (a house or building) being reconstructed three-dimensionally. See also page 2, column 1, paragraph 1: “Images from a commodity RGB camera, mounted on an autonomously navigated quadcopter, are fed into a multi-view stereo reconstruction pipeline that produces high-quality results but is computationally expensive.” The multi-view stereo reconstruction is a use of a principle of triangulation. See page 6, section 4.1: “The high-level goal is to find an optimized subset of viewpoints from a larger set of candidate views that maximizes the information gained about the 3D surface of the scene… Each viewpoint v ∈ C has an associated position and orientation denoted as v.p and v.q.” The viewpoints are interpreted as the imaging positions, and the optimized subset of viewpoints (as also seen in the figures) is used for taking M range images at M imaging positions in order to reconstruct the object. Hepp);

“an imaging controller configured to obtain… range images of a teaching object by using the range image sensor from N imaging positions, where N > M” (See page 6, section 4.1: “The high-level goal is to find an optimized subset of viewpoints from a larger set of candidate views that maximizes the information gained about the 3D surface of the scene. We assume that we are given a graph G = (C,M) of viewpoint candidates C alongside observed voxels and motions M between viewpoints as edges. Each viewpoint v ∈ C has an associated position and orientation denoted as v.p and v.q…” The optimized viewpoints are M and the larger set of candidate viewpoints are N viewpoints. See also fig. 2 on page 2: (B) shows the candidate viewpoints and (C) shows the optimized set of viewpoints. See also pages 7-8, section 4.4, Viewpoint Candidate Graph. Since each viewpoint contains the observed voxels, it implicitly teaches obtaining N range images at N imaging positions. Hepp);

“a data generator configured to generate a plurality of composite data pieces from a plurality of different combinations of the… range images” (See page 5, section 3, System Overview, paragraph 2: “…The quadrotor flies this regular pattern and records an initial set of images. These recordings are then processed via a state-of-the-art SfM and MVS pipeline by Schönberger et al. (2016a, 2016b) to attain camera poses together with depth and normal maps for each viewpoint. To generate a 3D surface reconstruction, the depth maps are fused into a dense point cloud, and, utilizing the Poisson Surface Reconstruction method (Kazhdan and Hoppe 2013), a mesh is extracted (Figure 2(a))… In addition to the initial reconstruction, we compute a volumetric occupancy map containing occupied, free-space, and unobserved voxels. Each voxel also carries with it a measure of observation quality.” A dense point cloud (containing a plurality of composite data pieces) is created from a set of initial images which utilizes voxels. A voxel can represent a point in the dense point cloud. It is also used in equations 1-8. Hepp);

“a determiner configured to calculate accuracy of each of the plurality of composite data pieces indicating a degree of matching between the composite data piece and the teaching object” (See page 6, equations 1-6, most importantly 4-6, which represent voxel information (a point in a point cloud) used for stereo matching. Then see page 8, section 4.4.1, Viewpoint Information, along with equations 7 and 8. See also page 5, column 1, penultimate and final paragraphs: “In addition to the initial reconstruction, we compute a volumetric occupancy map containing occupied, free-space, and unobserved voxels. Each voxel also carries with it a measure of observation quality. The occupancy map (Figure 2(b)) is used during planning to reason about free-space and collision freedom as well as approximation of the observable surface area from any given viewpoint and the (remaining) uncertainty about the scene. The main objective of our optimization formulation is to maximize total information (i.e., certainty about voxels in the region of interest)…” See also page 5, column 2, last paragraph, together with the paragraph at the beginning of the following page: “The occupancy map OM is essential in distinguishing between occupied, free, and unobserved space. This is encoded by an occupancy value oc(τ) ∈ [0,1] for each voxel τ ∈ OM. Here, we overload the term occupancy to encompass both a known occupancy and an unknown occupancy (i.e., a value close to 0 encodes a known empty voxel, a value close to 1 encodes a known occupied voxel, and a value close to 0.5 encodes an unknown voxel).” On page 6, equations 2-3 show that the occupancy value is utilized to determine when a voxel is occupied, and this occupancy value represents the accuracy of the current voxel in the point cloud representing different surfaces of the reconstruction model. Therefore this represents a degree of matching (varying from 0 to 1; matched = known occupied = 1). Hepp);

and “to determine M imaging positions from the N imaging positions based on the accuracy of each of the plurality of composite data pieces” (See section 4.1 on page 6, paragraph 1: “The high-level goal is to find an optimized subset of viewpoints from a larger set of candidate views that maximizes the information gained about the 3D surface of the scene.” The previously stated equations (1-8) are used to obtain a smaller subset of viewpoints from a larger set of candidate viewpoints. See also page 7, section 4.3, Algorithm 1, and page 16, Algorithms 2 and 3. See also fig. 1 and fig. 2. Hepp).

However, Hepp does not teach “a range image sensor” and “N range images”.

Martins teaches “a range image sensor” (See page 4, paragraph 1: “Each viewpoint is completely defined by the optical range sensor orientation (view direction) and the associated volume to be scanned.”) and “N range images” (See page 8, paragraphs 2-4: “This viewpoint set searching process may not generate an optimum viewpoint set (in terms of the number of viewpoints), but guarantees that the viewpoint set is complete in the sense that, all possible measurable surface voxels will be covered by the viewpoint set.” And pages 8-10, section 4, Scanning path generation and surface scanning; on page 10, paragraph 2: “The process of scanning path generation for each viewpoint is executed off-line and is then used to drive the surface scanning process. In this last phase, the object surface is scanned according to the viewpoints and trajectories previously defined.” See also equations 1-8 on pages 5-7. On page 7, paragraph 4: “The function G(j) is computed to each viewpoint j ∈ V, where xs and xsc are weighting coefficients used to set the weight of each partial function and can range between 0.0 and 1.0 considering that xs + xsc = 1;” Martins).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hepp with the teachings of Martins to utilize a range image sensor and obtain a first set of images. The modification would have been motivated by the desire for the ability to scan free-form surfaces and to acquire thousands of points per second; in addition, the Martins process guarantees that all measurable surface voxels are covered, and is therefore an improvement, as suggested by Martins (See page 2, paragraph 1: “The recent development of fast and accurate non-contact optical 3D range sensors [15] offers the possibility to overcome some of the contact technology problems. The capability to acquire thousands of points per second and the ability to scan free form surfaces are the main advantages.” See also page 8, paragraphs 3-4: “This viewpoint set searching process may not generate an optimum viewpoint set (in terms of the number of viewpoints), but guarantees that the viewpoint set is complete in the sense that, all possible measurable surface voxels will be covered by the viewpoint set.” Martins).

Claims 14 and 15 are rejected under the same analysis as claim 1. (See page 15, section Recursive Greedy Method; the system utilizes storage: “To speed up this computation, we keep a separate list that stores for each viewpoint a tuple of the viewpoint index and the corresponding information gain given the current viewpoint set.” This shows that the aerial robot has the ability to store the algorithms and data offline (see page 6, section 4, Method: “One difficulty that presents itself is that the final objective of 3D reconstruction quality cannot be measured directly due to the absence of ground-truth data and the offline nature of the SfM and MVS pipeline.”). In addition, it is well known in the art that computers and robots contain storage to perform actions and processes. Therefore, at least implicitly, storage is taught. Hepp)

As per claim 2, Hepp in view of Martins teaches “The information processing system according to claim 1, wherein each of the plurality of composite data pieces is point cloud data representing the object in a three-dimensional space using a point cloud, and the accuracy is a value in accordance to a variation in the point cloud with respect to the teaching object.” (See page 5, column 2, last paragraph, together with the paragraph at the beginning of the following page: “The occupancy map OM is essential in distinguishing between occupied, free, and unobserved space. This is encoded by an occupancy value oc(τ) ∈ [0,1] for each voxel τ ∈ OM. Here, we overload the term occupancy to encompass both a known occupancy and an unknown occupancy (i.e., a value close to 0 encodes a known empty voxel, a value close to 1 encodes a known occupied voxel, and a value close to 0.5 encodes an unknown voxel).” On page 6, equations 2-3 show that the occupancy value is utilized to determine when a voxel is occupied, and this occupancy value represents the accuracy of the current voxel in the point cloud representing different surfaces of the reconstruction model. Therefore this represents a degree of matching (varying from 0 to 1; matched = known occupied = 1). See also page 5, section 3, System Overview, paragraph 2: “…The quadrotor flies this regular pattern and records an initial set of images. These recordings are then processed via a state-of-the-art SfM and MVS pipeline by Schönberger et al. (2016a, 2016b) to attain camera poses together with depth and normal maps for each viewpoint. To generate a 3D surface reconstruction, the depth maps are fused into a dense point cloud, and, utilizing the Poisson Surface Reconstruction method (Kazhdan and Hoppe 2013), a mesh is extracted (Figure 2(a))… In addition to the initial reconstruction, we compute a volumetric occupancy map containing occupied, free-space, and unobserved voxels. Each voxel also carries with it a measure of observation quality.” See also fig. 1 and fig. 2. Hepp)

As per claim 3, Hepp in view of Martins teaches “The information processing system according to claim 2, wherein the variation in the point cloud is based on variations of points in the plurality of composite data pieces with respect to a predetermined reference surface and on variations of points in the plurality of composite data pieces with respect to a surface of the teaching object.” (See page 5, column 2, last paragraph, together with the paragraph at the beginning of the following page: “The occupancy map OM is essential in distinguishing between occupied, free, and unobserved space. This is encoded by an occupancy value oc(τ) ∈ [0,1] for each voxel τ ∈ OM. Here, we overload the term occupancy to encompass both a known occupancy and an unknown occupancy (i.e., a value close to 0 encodes a known empty voxel, a value close to 1 encodes a known occupied voxel, and a value close to 0.5 encodes an unknown voxel).” As seen on page 5, column 1, paragraphs 3-4: “First, a user defines a Region of Interest (ROI) and specifies a simple and safe overhead pattern via a map-based interface to acquire an initial set of images… To generate a 3D surface reconstruction, the depth maps are fused into a dense point cloud, and, utilizing the Poisson Surface Reconstruction method (Kazhdan and Hoppe 2013), a mesh is extracted (Figure 2(a)). It is important to note that this initial reconstruction is highly inaccurate and incomplete since the viewpoints stem from a simple, regular pattern flown at relatively high altitude to avoid collisions.” See also fig. 2. This initial reconstruction is considered a predetermined reference surface using variations (from the initial occupancy), and then variations (occupancy from different viewpoints) are also used to determine the final occupancy of the voxel. Therefore the final variation of the point cloud is based on the initial and subsequent occupancy values. Hepp)

As per claim 4, Hepp in view of Martins teaches “The information processing system according to claim 1, further comprising: a path generator configured to generate a movement path for changing a position of the range image sensor to the M imaging positions to allow the range image sensor to complete imaging the object at the M imaging positions in a shortest time.” (See fig. 2 along with page 6, section 4.1, Optimizing Viewpoint Trajectories, paragraphs 1-5: “The high-level goal is to find an optimized subset of viewpoints from a larger set of candidate views that maximizes the information gained about the 3D surface of the scene. We assume that we are given a graph G = (C,M) of viewpoint candidates C alongside observed voxels and motions M between viewpoints as edges… The goal of the method is to generate a trajectory (i.e., a path through a subset of the nodes in the candidate graph) for the quadcopter that yields good reconstruction quality and fulfills robot constraints.” See also page 5, column 1, paragraph 1: “For reasonable runtimes of 10 minutes, we observe that our approach consistently outperforms this method.” See also page 3, column 1, paragraph 2: “Given the emergence of small and affordable aerial robots (MAVs), equipped with high-resolution cameras, it is a natural choice to leverage these for image acquisition… Moreover, current MAVs are battery constrained to 10- to 15-minute flight times, making intelligent viewpoint selection an even more pressing issue.” As can be seen, 10 minutes is the shortest time of the presented battery constraints; therefore it falls within the BRI of “a shortest time”. Hepp)

As per claim 5, Hepp in view of Martins teaches “The information processing system according to claim 1, wherein the determiner determines the M imaging positions based on the accuracy and time for imaging used by the range image sensor.” (See fig. 2 along with page 6, section 4.1, Optimizing Viewpoint Trajectories, paragraphs 1-5: “The high-level goal is to find an optimized subset of viewpoints from a larger set of candidate views that maximizes the information gained about the 3D surface of the scene. We assume that we are given a graph G = (C,M) of viewpoint candidates C alongside observed voxels and motions M between viewpoints as edges… The goal of the method is to generate a trajectory (i.e., a path through a subset of the nodes in the candidate graph) for the quadcopter that yields good reconstruction quality and fulfills robot constraints.” See also page 5, column 1, paragraph 1: “For reasonable runtimes of 10 minutes, we observe that our approach consistently outperforms this method.” Since it uses equations 1-8, it is also based on the accuracy given by the occupancy of the voxels in the point cloud. Hepp)

As per claim 7, Hepp in view of Martins teaches “The information processing system according to claim 1, wherein the range image sensor has a predetermined upper limit for time for imaging, and the M is a greatest number of range images to be captured by the range image sensor until the time for imaging used by the range image sensor reaches the predetermined upper limit.” (See page 11, column 2, section 5.5.1, Church: “Church. Figure 7 shows results for the church scene, acquired with a total of 160 images. The initial flight pattern uses 20 viewpoints arranged in an ellipse. Based on the initial reconstruction, a viewpoint path with 140 viewpoints and a maximum flight time of 10 minutes was planned (see Figure 1) and flown.” The 140 viewpoints is the variable M (the greatest number of range images), since it accounts for M images, and in this experiment the upper limit is 10 minutes. Hepp)

As per claim 11, Hepp in view of Martins teaches “The information processing system according to claim 1, wherein the imaging controller further obtains range images of the teaching object (50A) captured with the range image sensor at adjacent positions that are in a constant positional relationship to each of the N imaging positions, and the data generator generates the plurality of composite data pieces from a plurality of different combinations of N image sets, and each of the N image sets includes a range image captured at one of the N imaging positions and a range image captured at an adjacent position of the adjacent positions for the one of the N imaging positions.” (See page 8, column 1, paragraphs 2-5: “To sample new 3D candidate positions, we take the first 3D position from the exploration queue and generate six new positions by adding an offset in the −x, +x, −y, +y, −z, and +z directions, respectively. The resulting positions are discarded if they are too close to existing viewpoint candidates or do not lie in free space; otherwise they are added to C and to the exploration queue.” The system adds viewpoints used for acquiring images, and these are adjacent and based on a constant positional relationship. See page 5, section 3, System Overview, paragraph 2: “In addition to the initial reconstruction, we compute a volumetric occupancy map containing occupied, free-space, and unobserved voxels. Each voxel also carries with it a measure of observation quality.” A dense point cloud (containing a plurality of composite data pieces) is created from a set of initial images which utilizes voxels. A voxel can represent a point in the dense point cloud. It is also used in equations 1-8, which show the use of calculated viewpoints with added viewpoints. See also page 8, section 4.4.1, most importantly equation 7, and column 2, paragraph 1, along with fig. 2. Hepp)

As per claim 13, Hepp in view of Martins teaches “The information processing system according to claim 1, wherein the information processing system includes the measurement apparatus, and the measurement apparatus obtains, as a result of the measurement of the object, composite data of the M range images of the object captured by the range image sensor at the M imaging positions.” (See fig. 2 and page 5, section 3, System Overview, column 1, penultimate paragraph: “In addition to the initial reconstruction, we compute a volumetric occupancy map containing occupied, free-space, and unobserved voxels. Each voxel also carries with it a measure of observation quality.” and column 2, paragraph 3: “Figure 2(c) shows the output of our planning method, where viewpoints that were added due to their contributed information are rendered in blue. Additional viewpoints that were added to ensure that the SfM & MVS backend can register all images into a single reconstruction are rendered in cyan. The edges are color coded to signal MAV progress along the path. The plan is then executed by the drone, and the acquired images are used to update the 3D model (Figure 2(d)).” Each viewpoint represents an imaging position used to acquire an image. Fig. 2(d) shows the final 3D reconstructed model, which uses the SfM and MVS pipeline. See also page 6, section 4.1, Optimizing Viewpoint Trajectories, equations 1-6, and page 8, equation 7. Examiner interprets “composite data” as any data used to represent a voxel/point in the 3D model. This is all used for the optimized subset of viewpoints, as seen on page 6, section 4.1. Hepp)

Pertinent Prior Art

Yoshikawa et al. (US 20190318498 A1) discloses finding optimal viewpoints to measure an object surface (paragraphs 102, 103 and 108), but does not disclose the dependent claims. Tomomi et al. (JP2015178984A) discloses measurement accuracy for a 3D model (see abstract and page 2, paragraphs 1-4), but does not disclose the dependent claims.

Allowable Subject Matter

Claims 6, 8-10 and 12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DYLAN J MENDEZ MUNIZ, whose telephone number is (703) 756-5672. The examiner can normally be reached M-F, 8AM-5PM ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Moyer, can be reached at (571) 272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DYLAN JOHN MENDEZ MUNIZ/
Examiner, Art Unit 2675

/ANDREW M MOYER/
Supervisory Patent Examiner, Art Unit 2675
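
For readers parsing the rejection, the core Hepp technique the examiner relies on (greedily choosing M viewpoints out of N candidates to maximize information gained about a voxel occupancy map with values oc(τ) ∈ [0,1]) can be seen in miniature below. This is a heavily simplified Python sketch under assumed data structures, not Hepp's actual pipeline, the examiner's claim mapping, or the claimed system:

```python
# Simplified sketch of greedy viewpoint-subset selection over a voxel
# occupancy map, in the spirit of the approach the rejection cites from
# Hepp (Plan3D). All structures here are assumptions for illustration.

def information_gain(viewpoint_voxels, occupancy, covered):
    """Gain = summed uncertainty of voxels this viewpoint would newly observe.

    occupancy[v] in [0, 1]: ~0 known empty, ~1 known occupied, ~0.5 unknown.
    Uncertainty peaks at 0.5, so each not-yet-covered voxel is scored by
    how close its occupancy value is to 0.5.
    """
    return sum(
        1.0 - 2.0 * abs(occupancy[v] - 0.5)
        for v in viewpoint_voxels
        if v not in covered
    )

def select_viewpoints(candidates, occupancy, m):
    """Greedily pick m of the N candidate viewpoints.

    candidates: dict mapping viewpoint id -> set of voxel ids it observes.
    """
    selected, covered = [], set()
    for _ in range(m):
        best = max(
            (v for v in candidates if v not in selected),
            key=lambda v: information_gain(candidates[v], occupancy, covered),
        )
        selected.append(best)
        covered |= candidates[best]
    return selected

# Toy example: N = 4 candidate viewpoints, pick M = 2.
occupancy = {0: 0.5, 1: 0.5, 2: 0.9, 3: 0.1, 4: 0.5}
candidates = {"v1": {0, 1}, "v2": {1, 2}, "v3": {3, 4}, "v4": {2, 3}}
print(select_viewpoints(candidates, occupancy, m=2))  # -> ['v1', 'v3']
```

Each greedy round picks the candidate whose newly observed voxels carry the most remaining uncertainty (occupancy nearest 0.5), which is the role the rejection assigns to Hepp's Algorithm 1.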

Prosecution Timeline

Nov 15, 2023 — Application Filed
Feb 07, 2026 — Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597231 — INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD — Granted Apr 07, 2026 (2y 5m to grant)
Patent 12573053 — Image Shadow Detection Method and System, and Image Segmentation Device and Readable Storage Medium — Granted Mar 10, 2026 (2y 5m to grant)
Patent 12573040 — IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD — Granted Mar 10, 2026 (2y 5m to grant)
Patent 12567127 — MEDICAL USE IMAGE PROCESSING METHOD, MEDICAL USE IMAGE PROCESSING PROGRAM, MEDICAL USE IMAGE PROCESSING DEVICE, AND LEARNING METHOD — Granted Mar 03, 2026 (2y 5m to grant)
Patent 12555175 — METHOD FOR EMBEDDING INFORMATION IN A DECORATIVE LABEL — Granted Feb 17, 2026 (2y 5m to grant)
Based on this examiner's 5 most recent grants. Study what changed in each case to get past this examiner.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 99% (+25.0%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
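
How the 99% "with interview" figure relates to the 83% base rate is not stated. A plausible reading, sketched below, is percentage-point addition with a cap; that combination rule is an assumption, and the tool may compute it differently:

```python
# Assumption: the projection adds the interview lift in percentage points
# to the base grant probability and caps the result below 100%.
BASE_GRANT_PROB = 0.83  # career allow rate (15 / 18 resolved)
INTERVIEW_LIFT = 0.25   # +25.0 percentage points with an interview

def projected_grant_prob(base: float, lift: float, cap: float = 0.99) -> float:
    return min(base + lift, cap)

print(f"with interview: {projected_grant_prob(BASE_GRANT_PROB, INTERVIEW_LIFT):.0%}")
# -> 99% (0.83 + 0.25 = 1.08, so the cap binds)
```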
