Prosecution Insights
Last updated: April 19, 2026
Application No. 18/739,360

THREE-DIMENSIONAL MODEL GENERATION DEVICE, THREE-DIMENSIONAL MODEL GENERATION METHOD, AND NON-TRANSITORY STORAGE MEDIUM

Status: Non-Final OA (§101)
Filed: Jun 11, 2024
Examiner: MCDOWELL, JR, MAURICE L
Art Unit: 2612
Tech Center: 2600 (Communications)
Assignee: JVCKenwood Corporation
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% (790 granted / 913 resolved; +24.5% vs TC avg), above average
Interview Lift: +12.9% (moderate) across resolved cases with interview
Avg Prosecution: 3y 0m typical timeline; 23 applications currently pending
Career History: 936 total applications across all art units

Statute-Specific Performance

§101: 16.1% (-23.9% vs TC avg)
§103: 47.7% (+7.7% vs TC avg)
§102: 12.8% (-27.2% vs TC avg)
§112: 7.7% (-32.3% vs TC avg)

TC averages are estimates. Based on career data from 913 resolved cases.
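The deltas read as percentage-point differences between this examiner's per-statute rate and the Tech Center average, which is not shown on the page. A small Python sketch, under that assumption, back-derives the averages from the stated figures (the dictionary name and the interpretation of the rates are ours, not the dashboard's documented definitions):

```python
# Stated per-statute rates for this examiner (percent) and the page's
# delta vs. the TC 2600 average, in percentage points.
statutes = {
    "§101": (16.1, -23.9),
    "§103": (47.7, +7.7),
    "§102": (12.8, -27.2),
    "§112": (7.7, -32.3),
}

for name, (examiner_rate, delta) in statutes.items():
    tc_avg = examiner_rate - delta  # back-derived TC average estimate
    print(f"{name}: examiner {examiner_rate:.1f}% vs TC avg ~{tc_avg:.1f}%")
```

Notably, all four deltas back out to the same ~40.0% baseline, which suggests the page compares each statute against a single TC-wide average rather than per-statute averages.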

Office Action

§101 Non-Final Rejection, mailed Jan 27, 2026
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: THREE-DIMENSIONAL MODEL GENERATION DEVICE, THREE-DIMENSIONAL MODEL GENERATION METHOD, AND NON-TRANSITORY STORAGE MEDIUM TO DETECT AN OPTICAL SURFACE REGION IN MULTIPLE CAPTURED IMAGES.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: imager, optical surface detection unit, model generation unit and color attribute detection unit in claim 1; area detection unit in claim 4.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-5 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim(s) does/do not fall within at least one of the four categories of patent eligible subject matter because claim 1 is directed to a three-dimensional model generation device comprising the steps of acquire, detect, arrange, detect, generate, arrange and arrange, which are nothing more than software instructions. Software instructions are non-statutory under 35 U.S.C. 101. Claims 2-3 depend from claim 1 and expound on the steps of claim 1; claim 4 includes additional steps of detect and arrange; therefore claims 2-4 are rejected under the same rationale as claim 1. Claim 5 is directed to a three-dimensional model generation method comprising the steps of acquiring, detecting, arranging, detecting, arranging and arranging; therefore claim 5 has the same problem as claim 1 and is rejected under the same rationale.

Allowable Subject Matter

Claim 6 is allowed.
The following is an examiner's statement of reasons for allowance. Regarding claim 6, the prior art does not teach:

generating, in a case in which the color attribute of the optical surface region has a predetermined tendency, the mask having a color attribute corresponding to the color attribute of the optical surface region, and arranging the generated mask;

arranging, in a case in which the color attribute of the optical surface region does not have a predetermined tendency and the optical surface region is a reflective surface region in which a reflected image visually recognized by reflection of light is observed, the mask corresponding to the reflective surface region; and

arranging, in a case in which the color attribute of the optical surface region does not have a predetermined tendency and the optical surface region is a transmissive surface region in which a transmitted visual object visually recognized through a transparent member is observed, the mask corresponding to the transmissive surface region.

Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled "Comments on Statement of Reasons for Allowance."

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

CHUAH (US 10,937,247 B1) discloses systems and methods for an image capture process using ring paths, which may include traversing a user device around a ring path in the center of a room, capturing imaging data using the user device during the traversal, and processing the imaging data using photogrammetry. The imaging data may be captured using an imaging sensor associated with the user device, and the imaging data may be processed based on data received from position and orientation sensors associated with the user device. In addition, a three-dimensional model of the room may be generated based on the imaging data.

TERANISHI (US 2024/0296621 A1) discloses a three-dimensional model generation method that includes: obtaining subject information including a plurality of positions on a subject in a three-dimensional space; obtaining a first camera image of the subject from a first viewpoint and a second camera image of the subject from a second viewpoint; determining a search range in the three-dimensional space, including a first three-dimensional point on the subject corresponding to a first point in the first camera image, based on the subject information and without using map information that is generated by camera calibration executed by shooting the subject from a plurality of viewpoints and includes three-dimensional points each indicating a position on the subject in the three-dimensional space; searching for a similar point that is similar to the first point, in a range in the second camera image corresponding to the search range; and generating a three-dimensional model using the search result.

FALSTRUP (US 10,453,262 B1) discloses an apparatus and method for the creation of dynamically reflecting car mirrors in a virtual application using a proprietary layered panorama method. This approach utilizes the panoramic background as a source image for the reflection and specially produced masks for the mirrors in each stereo panoramic vehicle image.

YANG (CN 111339917 A) discloses a method for detecting real scenes behind glass, belonging to the field of object detection. The method combines LCFI blocks to efficiently integrate context features of different scales, successfully detecting glass of different sizes. A plurality of LCFI combination blocks are embedded in the glass detection network (GDNet) to obtain large-scale context features at different levels, realizing reliable and accurate glass detection across a variety of scenes.

WHELAN et al., "Reconstructing Scenes with Mirror and Glass Surfaces", ACM Trans. Graph. 37, 4, Article 102 (August 2018), 11 pages, https://doi.org/10.1145/3197517.3201319, discloses that planar reflective surfaces such as glass and mirrors are notoriously hard to reconstruct for most current 3D scanning techniques; when treated naively, they introduce duplicate scene structures, effectively destroying the reconstruction altogether. The authors' key insight is that an easy-to-identify structure attached to the scanner (in their case an AprilTag) can yield reliable information about the existence and the geometry of glass and mirror surfaces in a scene. They introduce a fully automatic pipeline that reconstructs the geometry and extent of planar glass and mirror surfaces while distinguishing between the two, and that can automatically segment observations of multiple reflective surfaces in a scene based on their estimated planes and locations. In the proposed setup, minimal additional hardware is needed to create high-quality results, which they demonstrate using reconstructions of several scenes with a variety of real mirrors and glass.

TORRES-GOMEZ et al., "Recognition and Reconstruction of Transparent Objects for Augmented Reality", IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 2014, 10-12 September 2014, Munich, Germany, discloses that dealing with real transparent objects for AR is challenging due to their lack of texture and visual features as well as the drastic changes in appearance as the background, illumination and camera pose change. The few existing methods for glass object detection usually require a carefully controlled environment or specialized illumination hardware, or ignore information from different viewpoints. The authors explore a learning approach for classifying transparent objects from multiple images with the aim of both discovering such objects and building a 3D reconstruction to support convincing augmentations. They extract, classify and group small image patches using a fast graph-based segmentation and employ a probabilistic formulation for aggregating spatially consistent glass regions. They demonstrate the approach via analysis of glass region detection performance and example 3D reconstructions that allow virtual objects to interact with the reconstructed glass.

YANG et al., "Where Is My Mirror?", Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 8809-8818, discloses that existing computer vision systems do not consider mirrors and hence may get confused by the reflected content inside a mirror, resulting in severe performance degradation. Separating the real content outside a mirror from the reflected content inside it is non-trivial; the key challenge is that mirrors typically reflect content similar to their surroundings, making the two very difficult to differentiate. The paper presents a novel method to segment mirrors from an input image, claimed as the first computational approach to the mirror segmentation problem. Its contributions are: a large-scale mirror dataset of mirror images with manually annotated masks, covering a variety of daily life scenes and made publicly available; a novel network, called MirrorNet, for mirror segmentation, modeling both semantic and low-level color/texture discontinuities between the contents inside and outside of mirrors; and extensive experiments showing that the method outperforms carefully chosen baselines from state-of-the-art detection and segmentation methods.

LIN et al., "Progressive Mirror Detection", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 3697-3705, discloses that the mirror detection problem is important because mirrors can affect the performance of many vision tasks, and difficult because it requires an understanding of global scene semantics. A recent method detects mirrors by learning multi-level contextual contrasts between the inside and outside of mirrors, which helps locate mirror edges implicitly. Observing that the content of a mirror reflects the content of its surroundings, separated by the edge of the mirror, the authors propose a model that progressively learns the content similarity between the inside and outside of the mirror while explicitly detecting the mirror edges. The work has two main contributions: a new relational contextual contrasted local (RCCL) module to extract and compare mirror features with their corresponding context features, together with an edge detection and fusion (EDF) module to learn the features of mirror edges in complex scenes via explicit supervision; and a challenging benchmark dataset of 6,461 mirror images that, unlike the existing MSD dataset with its limited diversity, covers a variety of scenes and is much larger in scale.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAURICE L MCDOWELL, JR, whose telephone number is (571) 270-3707. The examiner can normally be reached Mon-Fri, 2pm-10pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Said A. Broome, can be reached at 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MAURICE L. MCDOWELL, JR/
Primary Examiner, Art Unit 2612
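For engineers mapping the allowance to the underlying algorithm, the three branches the examiner quotes for claim 6 amount to a small decision tree: check the color-attribute tendency first, then fall through to the reflective vs. transmissive surface type. Below is a minimal, illustrative Python sketch of that reading; the type names, the tendency predicate, and the mask representations are all hypothetical and are not taken from the application:

```python
from dataclasses import dataclass
from enum import Enum, auto


class SurfaceType(Enum):
    REFLECTIVE = auto()    # a reflected image is observed (e.g., a mirror)
    TRANSMISSIVE = auto()  # an object is seen through a transparent member


@dataclass
class OpticalSurfaceRegion:
    """Hypothetical stand-in for a detected optical surface region."""
    color_attribute: tuple  # e.g., mean (R, G, B) of the region, each in [0, 1]
    surface_type: SurfaceType


def has_predetermined_tendency(color: tuple) -> bool:
    """Placeholder predicate: the claim leaves 'predetermined tendency' to the
    specification; here we arbitrarily treat 'strongly tinted' as the tendency."""
    r, g, b = color
    return max(r, g, b) - min(r, g, b) > 0.25


def select_mask(region: OpticalSurfaceRegion) -> str:
    """Mirror the three allowance branches of claim 6."""
    if has_predetermined_tendency(region.color_attribute):
        # Branch 1: generate and arrange a mask whose color attribute
        # corresponds to the color attribute of the region.
        return f"color-matched mask for {region.color_attribute}"
    if region.surface_type is SurfaceType.REFLECTIVE:
        # Branch 2: no tendency + reflective surface -> reflective-surface mask.
        return "reflective-surface mask"
    # Branch 3: no tendency + transmissive surface -> transmissive-surface mask.
    return "transmissive-surface mask"


print(select_mask(OpticalSurfaceRegion((0.8, 0.3, 0.2), SurfaceType.REFLECTIVE)))
print(select_mask(OpticalSurfaceRegion((0.5, 0.5, 0.5), SurfaceType.TRANSMISSIVE)))
```

Note how the reflective/transmissive distinction only matters once the tendency check fails; that fall-through ordering is exactly what the examiner found missing from the prior art.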

Prosecution Timeline

Jun 11, 2024
Application Filed
Jan 27, 2026
Non-Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602875: TECHNIQUE FOR THREE DIMENSIONAL (3D) HUMAN MODEL PARSING
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12602887: AUGMENTED REALITY CONTROL SURFACE
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12598281: CONTROL APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM FOR DETERMINING A CAMERA PATH INDICATING A MOVEMENT PATH OF A VIRTUAL VIEWPOINT IN A THREE-DIMENSIONAL SPACE
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12579741: DETECTING THREE DIMENSIONAL (3D) CHANGES BASED ON MULTI-VIEWPOINT IMAGES
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12561905: Optimizing Generative Machine-Learned Models for Subject-Driven Text-to-3D Generation
Granted Feb 24, 2026 (2y 5m to grant)

Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 99% (+12.9%)
Median Time to Grant: 3y 0m
PTA Risk: Low

Based on 913 resolved cases by this examiner. Grant probability derived from career allow rate.
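These figures are consistent with a simple additive model: the career allow rate serves as the base grant probability, and the interview lift is added in percentage points. A minimal sketch of that arithmetic, assuming additivity (our reading, not the dashboard's documented method):

```python
base = 790 / 913               # career allow rate: 0.8653, displayed as 86%
lift = 0.129                   # interview lift: +12.9 percentage points

with_interview = min(base + lift, 1.0)   # 0.9943, displayed as 99%
print(f"base {base * 100:.1f}%, with interview {with_interview * 100:.1f}%")
```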
