Prosecution Insights
Last updated: April 19, 2026
Application No. 18/444,478

CONTROLLING VIRTUAL APPEARANCES IN IMAGES THAT ARE CAPTURED AND PROVIDED BY IMAGE-CAPTURING DEVICES

Final Rejection — §102, §103
Filed: Feb 16, 2024
Examiner: CALDERON, CYNTHIA
Art Unit: 2639
Tech Center: 2600 — Communications
Assignee: Elc Management LLC
OA Round: 2 (Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 4m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 77% (above average; 602 granted / 782 resolved; +15.0% vs TC avg)
Interview Lift: +18.5% across resolved cases with interview
Avg Prosecution: 2y 4m typical timeline; 17 currently pending
Total Applications: 799 across all art units (career history)
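The headline examiner metrics above are simple ratios over the career data (602 granted of 782 resolved, 17 pending). A minimal sketch of the likely arithmetic; the variable names and formulas are illustrative assumptions, not the vendor's actual pipeline:

```python
# Illustrative reconstruction of the examiner summary metrics shown above.
# Formulas are assumptions; the inputs come from the displayed career data.
granted = 602
resolved = 782
pending = 17

allow_rate = granted / resolved          # career allow rate
total_applications = resolved + pending  # all applications handled

print(f"Career allow rate: {allow_rate:.0%}")       # -> 77%
print(f"Total applications: {total_applications}")  # -> 799
```

The 799 total matches 782 resolved plus 17 pending, which suggests the dashboard counts every application the examiner has touched, not just closed ones.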

Statute-Specific Performance

§101: 4.9% (-35.1% vs TC avg)
§103: 42.1% (+2.1% vs TC avg)
§102: 30.7% (-9.3% vs TC avg)
§112: 11.9% (-28.1% vs TC avg)
Tech Center average shown as an estimated baseline • Based on career data from 782 resolved cases
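Each per-statute delta is consistent with a single Tech Center baseline of roughly 40% per statute, back-computed from the figures above. A sketch of that comparison; the baseline value and the delta formula are assumptions inferred from the chart, not published USPTO data:

```python
# Illustrative check of the statute-specific deltas shown above.
# The ~40% Tech Center baseline is back-computed from the chart's
# numbers and is an estimate, not an official figure.
tc_avg_estimate = 40.0  # implied per-statute baseline, percent

examiner_rates = {"101": 4.9, "102": 30.7, "103": 42.1, "112": 11.9}

for statute, rate in examiner_rates.items():
    delta = rate - tc_avg_estimate
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```

That every delta resolves to the same 40.0% baseline hints the chart compares each rejection type's share of this examiner's rejections against a uniform Tech Center mix, rather than against four independently measured averages.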

Office Action

Grounds of rejection: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice of Amendments

2. The Examiner acknowledges the amended claims filed on 12/09/2025. Claim 4 has been amended.

Information Disclosure Statement

3. The information disclosure statement (IDS) submitted on 12/09/2025 is in compliance with the provisions of 37 CFR 1.97 and was considered by the examiner.

Response to Arguments

4. Applicant's arguments filed 12/09/2025 have been fully considered but they are not persuasive.

5. Regarding claim 1, Applicant argues that Lin does not disclose at least the element recited by independent claim 1, "responsive to the detection [under which virtual appearances of a subject person depicted in images that are provided by image-capturing devices are to be controlled]... transmit, via a wireless transceiver and to an image-capturing device, an indication of the set of controlling requirements for the virtual appearances of the subject person..."; see page 6, lines 10-17 of the Remarks. Specifically, Applicant submits that Lin discloses that the user's digital makeup enhancement data 236 can be shared with other external devices responsive to a user input. On the basis of that disclosure, Applicant contends that Lin does not disclose at least the element recited by independent claim 1, "responsive to the detection... transmit, via a wireless transceiver and to an image-capturing device, an indication of the set of controlling requirements for the virtual appearances of the subject person..."; see from page 6, line 18 to page 7, line 16 of the Remarks.

6. In response to Applicant's position, the examiner would like to point out that Lin discloses transmitting/sharing "the digital makeup enhancement data 236", which corresponds to the claimed "set of controlling parameters", in response to a positive "match between the currently detected facial data and the reference facial data 238", which corresponds to the claimed "condition under which virtual appearances of a subject person depicted in images that are provided by image-capturing devices are to be controlled"; see fig. 6 and paragraphs 0019-0023, 0031, 0053, 0063, 0076, 0084, 0100-0101 of Lin. The enhancement data 236 is not shared if the match is not confirmed. Even if Lin discloses additional steps of user interaction with the enhancement data 236, that does not take away from the fact that the enhancement data 236 (the claimed "set of controlling parameters") is only transmitted once the match exists. It is noted that the claims do not preclude user input in the transmission and application of the set of controlling parameters. It is further noted that the claims recite "the set of controlling requirements specified at least in part by the subject person"; see lines 12-13 of claim 1. Therefore, the subject person in the recited claims does provide, in a non-restrictive manner, input for the set of controlling requirements. The claims could be amended to exclude any type of user input when transmitting and applying the set of controlling parameters, assuming the Specification of the instant application provides support for such negative limitations. The claims could also be amended to indicate how exactly the subject person specifies the set of controlling parameters to clearly differentiate from the applied art.

Claim Rejections - 35 USC § 102

7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

8. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

9. Claims 1-3, 5-17 and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Lin et al. (US-PGPUB 2023/0230325).

Regarding claim 1, Lin discloses a system (Computing device 100/200; see figs. 1-2) for controlling virtual appearances of people depicted in images provided by image-capturing devices (Computing device uses a digital makeup enhancement module 134/234 to apply (for a live, real-time preview of) different digital/virtual makeup enhancements for different recognized user faces. Module 134/234 is integrated with camera applications 132/232; see paragraphs 0018, 0015), the system comprising: one or more sensors (Camera devices 104/204; see figs. 1, 2 and paragraphs 0014, 0021, 0035); one or more processors (Processors 220; see fig. 2 and paragraphs 0014, 0032, 0050, 0065); and computer-executable instructions that are stored on one or more memories of the system and that are executable by the one or more processors (Information for processing stored in storage devices 250 and software executed by processors; see paragraphs 0102-0103, 0015, 0032, 0050, 0058-0059) to cause the system to: detect, via the one or more sensors, a condition (Detect a match; see fig. 6) under which virtual appearances of a subject person depicted in images that are provided by image-capturing devices are to be controlled (Output digital images that include a digital representation of a face of a user. Images provided by cameras 104/204. Receive an indication of a match between facial data associated with the digital representation of the face of the user and reference facial data associated with a digital representation of a face of an enrolled user; see steps 302, 304 and paragraphs 0021, 0063, 0084, 0100); and responsive to the detection (In response to the detected match; see fig. 6): obtain a set of controlling requirements for the virtual appearances of the subject person, the set of controlling requirements specified at least in part by the subject person (Retrieve digital makeup enhancement data 236 that is associated with reference facial data 238; see step 606 and paragraph 0101. The user selects and confirms the digital makeup enhancements; see paragraphs 0019-0020, 0076 and fig. 6), and transmit, via a wireless transceiver and to an image-capturing device (Communication units 224 of computing device 200 communicate with external devices via wireless networks by transmitting and receiving network signals; see paragraphs 0053, 0023, 0031), an indication of the set of controlling requirements for the virtual appearances of the subject person so that a virtual appearance of the subject person depicted in one or more images that are provided by the image-capturing device is in accordance with the set of controlling requirements (Apply the digital makeup enhancement data to the facial data of the digital representation of the face of the user to generate and output modified digital images; see steps 608, 310, fig. 6 and paragraph 0101. The user can retrieve digital makeup enhancement data from a cloud; see paragraph 0021).

Regarding claim 2, Lin discloses everything claimed as applied above (see claim 1). In addition, Lin discloses the set of controlling requirements for the virtual appearances of the subject person transmitted to the image-capturing device prevent the image-capturing device from providing any virtual appearances of the subject person that are not in accordance with the set of controlling requirements (Only the retrieved digital enhancement data selected in accordance with the detected match is applied to the digital image to generate and output the modified digital image; see steps 602-610, fig. 6 and paragraphs 0099-0101).

Regarding claim 3, Lin discloses everything claimed as applied above (see claim 1). In addition, Lin discloses the one or more images that are provided by the image-capturing device are a modification, by the image-capturing device (Output the modified digital images; see step 610, fig. 6. One or more modified digital images comprise one or more modified real-time images; see paragraphs 0101, 0084), to an initial set of images that were initially captured via an image-capturing component of the image-capturing device, the initial set of images including one or more virtual appearances of the subject person (Output original images at step 602; see fig. 6. One or more real-time images captured by camera devices 204; see paragraphs 0100, 0084).

Regarding claim 5, Lin discloses everything claimed as applied above (see claim 1). In addition, Lin discloses the system generates the set of controlling requirements for the virtual appearances of the subject person based on an alteration, in accordance with one or more preferences indicated by the subject person, of a source image of the subject person (The enrolled user uses the makeup objects included in graphical makeup bar 363 to select and apply any number of different digital makeup enhancements to the face of the enrolled user displayed in image 370. Digital makeup enhancement module 234 generates, based on the selected digital makeup enhancements, the corresponding digital makeup enhancement data for storage in digital makeup enhancement data 236, and associates this data with the corresponding reference facial data 238 to which the makeup enhancement data is applied; see paragraphs 0076, 0019-0020).

Regarding claim 6, Lin discloses everything claimed as applied above (see claim 1).
In addition, Lin discloses the set of controlling requirements for the virtual appearances of the subject person includes a respective definition or specification for a respective virtual appearance of at least one of: a gender, a body shape, a hair style, a height, a bodily form, a modification to a first feature, a substitution for a second feature, or another aspect of the virtual appearances of the subject person (Digital makeup enhancement module 134 can use digital makeup enhancement data 136 to apply two-dimensional or three-dimensional makeup enhancements to face 143. The three-dimensional makeup rendering process can detect one or more facial landmarks in face 143 and construct a three-dimensional facial model, including a depth map model of facial information or related features, based on a source image. Module 134 can use the three-dimensional makeup rendering process to apply makeup enhancements via a blending process with respect to the pixel values of the image(s). Thus, the rendering process can take both the shape of face 143 and any effects such as depth or shadowing into account when performing the three-dimensional rendering process and applying the makeup enhancements to face 143; see paragraph 0043).

Regarding claim 7, Lin discloses everything claimed as applied above (see claim 1). In addition, Lin discloses the one or more images of the subject person provided by the image-capturing device includes a video image (Camera devices 104/204 are configured to capture one or more images during execution of a camera application, including moving images. Module 134 outputs, for display, the moving images; see paragraphs 0035, 0037, 0042, 0063, 0069).

Regarding claim 8, Lin discloses everything claimed as applied above (see claim 1). In addition, Lin discloses the image-capturing device is a smart glass device, an augmented reality device, a virtual reality device, or a digital camera (Camera devices 204 can be a charge-coupled device; see paragraph 0064. Digital images 360, 370, 380 are images that are captured by a camera application (e.g., one of applications 232) in real time, and these may comprise one or more still or moving images; see paragraph 0069).

Regarding claim 9, Lin discloses everything claimed as applied above (see claim 1). In addition, Lin discloses the wireless transceiver (Communication unit 224 includes a wireless communication unit that communicates with external devices via one or more networks by transmitting and/or receiving network signals; see paragraph 0053) is included in a personal electronic device of the subject person (Computing device 100/200 can include a mobile phone, a tablet computer, a personal digital assistant (PDA), a laptop computer, a portable gaming device, a portable media player, or a wearable computing device; see paragraphs 0014, 0049).

Regarding claim 10, Lin discloses everything claimed as applied above (see claim 9). In addition, Lin discloses the personal electronic device of the subject person is a wearable electronic device (Computing device 100/200 can include a wearable computing device (e.g., a watch, a wrist-mounted computing device, a head-mounted computing device); see paragraphs 0014, 0049).

Regarding claim 11, Lin discloses everything claimed as applied above (see claim 1). In addition, Lin discloses the one or more images are provided by the image-capturing device for rendering on a display (While using camera devices 104/204, computing device 100/200 can output, for display (e.g., at display device 102/202), one or more real-time images captured by camera devices 104/204. Digital makeup enhancement module 134/234 may then apply digital makeup enhancement data 136/236 to the facial data of the digital representation of the face of the user to generate the one or more modified digital images, where the one or more modified digital images comprise one or more modified real-time images. Computing device 100/200 then outputs, for display (e.g., at display device 102/202), the one or more modified real-time images to provide a live preview of the at least one corresponding digital makeup enhancement to the digital representation of the face of the user; see paragraphs 0021, 0084, 0100, 0101).

Regarding claim 12, Lin discloses everything claimed as applied above (see claim 1). In addition, Lin discloses the indication of the set of controlling requirements includes a code or a filter (Computing device 100/200 executes software to apply digital makeup enhancements (e.g., virtual makeup) to face 143. When implemented in software, the functions are stored as code on a computer-readable medium and executed by a hardware-based processing unit; see paragraphs 0015, 0032, 0102).

Regarding claim 13, Lin discloses everything claimed as applied above (see claim 1). In addition, Lin discloses the set of controlling requirements is included in a plurality of different sets of controlling requirements for the virtual appearances of the subject person that are stored on the one or more memories (One or more storage devices 250 within computing device 200 store information for processing during execution of modules 230, 232, 254 and 234; see paragraphs 0058-0060. Computing device can store customized digital makeup enhancements for various different enrolled users of computing device, and it can also store different groups of customized enhancements for each individual user; see paragraphs 0022, 0048); and the system selects, based on the detected condition, the set of controlling requirements from among the plurality of different sets of controlling requirements (Retrieve digital makeup enhancement data in response to the detected match and scenario (business or party makeup data); see fig. 6, steps 602-606 and paragraphs 0100-0101, 0078, 0098).

Regarding claim 14, Lin discloses everything claimed as applied above (see claim 1). In addition, Lin discloses the detected condition corresponds to at least one of: a presence of the image-capturing device, a date, a time, a physical location, a virtual location, an event (Detected match and specific scenario such as a business event or party event makeup data; see fig. 6, steps 602-606 and paragraphs 0100-0101, 0078, 0098), or a user instruction or command.

Regarding claim 15, Lin discloses everything claimed as applied above (see claim 1). In addition, Lin discloses the one or more memories of the system include one or more remote memories, and at least a portion of the set of controlling requirements for the virtual appearances of the subject person are stored on the one or more remote memories (Storage devices 250, or one or more of the components included in storage devices 250, can be stored on one or more remote computing devices that are external to computing device 200 (e.g., on one or more external servers); see paragraph 0060. The enhancements can be stored within digital makeup enhancement data 136/236 in the cloud; see paragraphs 0020, 0021, 0072, 0101).

Regarding claim 16, Lin discloses everything claimed as applied above (see claim 1).
In addition, Lin discloses the one or more memories of the system include one or more memories of a personal electronic device (Computing device 100/200 can include a mobile phone, a tablet computer, a personal digital assistant (PDA), a laptop computer, a portable gaming device, a portable media player, or a wearable computing device; see paragraphs 0014, 0049. The enhancements can be stored within digital makeup enhancement data 136/236 locally; see paragraph 0020) of the subject person, and at least a portion of the set of controlling requirements for the virtual appearances of the subject person are stored on the one or more memories of the personal electronic device (One or more storage devices 250 within computing device 200 store information for processing during execution of modules 230, 232, 254 and 234; see paragraphs 0058-0060).

Regarding claim 17, Lin discloses everything claimed as applied above (see claim 1). In addition, Lin discloses a user interface (UI module 130/230 causes display device 102/202 to present a graphical user interface. A user interacts with a respective user interface of each of applications 132/232 to cause computing device 100/200 to perform operations relating to corresponding application functionality; see paragraphs 0030-0033, 0096, 0098) via which a user at least one of: at least one of defines, specifies, modifies, deletes, updates (The user can use computing device 200 to make one or more modifications or updates to the previously stored enhancements for that user before they are applied; see paragraph 0096), or selects one or more sets of controlling requirements for the virtual appearances of the subject person depicted in images (UI module 230 can output a graphical menu that includes all of the groups of enhancement settings that have been previously saved for the user, and which are currently available for application. Digital makeup enhancement module 234 can then receive a selection of one of these groups of digital makeup settings for retrieval and application to the facial data of the user; see paragraph 0098); at least one of adds, modifies, deletes, specifies, or updates respective indications of one or more conditions to which respective transmissions of the one or more sets of controlling requirements are responsive; or indicates one or more image-capturing devices to which the one or more sets of controlling requirements and updates to the one or more sets of controlling requirements are to be provided.

Regarding claim 20, Lin discloses everything claimed as applied above (see claim 1). In addition, Lin discloses the set of controlling requirements for the virtual appearances of the subject person is included in a plurality of sets of controlling requirements for respective virtual appearances of multiple people stored in the one or more memories of the system (One or more storage devices 250 within computing device 200 store information for processing during execution of modules 230, 232, 254 and 234; see paragraphs 0058-0060. Computing device can store customized digital makeup enhancements for various different enrolled users of computing device; see paragraphs 0022, 0048); and the system is further configured to obtain a respective set of controlling requirements for virtual appearances of at least one other person responsive to a corresponding detected condition, and transmit, via a respective wireless transceiver (Wireless communication unit 224; see paragraphs 0053, 0023, 0031), an indication of the respective set of controlling requirements for the virtual appearances of the at least one other person to a respective image-capturing device (Retrieve digital makeup enhancement data in response to the detected match. Digital makeup enhancement module 134/234 can access customized digital makeup enhancements for various different users of computing device 100/200, thus enabling computing device 100/200 to automatically apply digital makeup enhancements to recognized faces that are included in digital images. While using camera devices 104/204, computing device 100/200 outputs, for display, real-time images captured by camera devices 104/204. Module 134/234 then applies digital makeup enhancement data 136/236 to the facial data of the digital representation of the face of the user to generate the one or more modified digital images; see fig. 6, steps 602-606 and paragraphs 0100-0101, 0021-0022, 0048, 0084).

Claim Rejections - 35 USC § 103

10. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

11. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

12. Claims 4 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Lin in view of Doken (US-PGPUB 2023/0260219).
Regarding claim 4, Lin discloses everything claimed as applied above (see claim 1). However, Lin fails to expressly disclose at least one of: the transmission of the indication of the set of controlling requirements for the virtual appearances of the subject person via the wireless transceiver of the system to the image-capturing device is at least one of: encrypted, transmitted via broadcast, or transmitted via point-to-point; or the transmission of the indication of the set of controlling requirements for the virtual appearances of the subject person via the wireless transceiver of the system to the image-capturing device utilizes a peer-to-peer wireless protocol or a short-range wireless protocol.

On the other hand, in a similar field of endeavor of controlling virtual appearances of captured objects, Doken discloses the transmission of the indication of the set of controlling requirements for the virtual appearances of the subject person via the wireless transceiver of the system to the image-capturing device utilizes a peer-to-peer wireless protocol or a short-range wireless protocol (In client-server-based embodiments, the control circuitry 3504 includes communications circuitry suitable for providing virtual object enhancements. The communications circuitry includes circuitry that enables peer-to-peer communication of user equipment devices, or communication of user equipment devices in locations remote from each other; see paragraph 0271. There are paths between user equipment devices, so that the devices communicate directly with each other via wireless paths (e.g., Bluetooth, infrared, IEEE 802-11x, etc.), or other short-range communication paths; see paragraph 0263).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lin and Doken such that the transmission of the indication of the set of controlling requirements for the virtual appearances of the subject person via the wireless transceiver of the system to the image-capturing device utilizes a peer-to-peer wireless protocol or a short-range wireless protocol, for the purpose of easily facilitating secure and flexible communication between devices across different locations.

Regarding claim 18, Lin discloses everything claimed as applied above (see claim 17). However, Lin fails to expressly disclose the computer-executable instructions are further executable to cause the system to provide, via the user interface, a suggestion of at least one controlling requirement for the virtual appearances of the subject person. Nevertheless, Doken discloses the computer-executable instructions are further executable to cause the system to provide, via the user interface, a suggestion of at least one controlling requirement for the virtual appearances of the subject person (Inputs for selecting a virtual object can be based on recommendations made based on analysis performed by an artificial intelligence algorithm; see paragraphs 0071, 0053, 0103).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lin and Doken such that the computer-executable instructions are further executable to cause the system to provide, via the user interface, a suggestion of at least one controlling requirement for the virtual appearances of the subject person, for the purpose of simplifying the virtual appearance enhancement process, thus reducing the time needed to obtain the modified images.

Regarding claim 19, Lin and Doken disclose everything claimed as applied above (see claim 18). However, Lin fails to expressly disclose the computer-executable instructions include one or more machine-learning algorithms that are executed by the one or more processors to generate the suggestion of the at least one controlling requirement for the virtual appearances of the subject person. On the other hand, Doken discloses the computer-executable instructions include one or more machine-learning algorithms that are executed by the one or more processors to generate the suggestion of the at least one controlling requirement for the virtual appearances of the subject person (Inputs for selecting a virtual object can be based on recommendations made based on analysis performed by an artificial intelligence algorithm; see paragraphs 0071, 0053, 0103. Virtual objects are obtained based on the user's profile, which is populated by the system based on analysis performed by machine learning or artificial intelligence algorithms; see paragraphs 0103, 0115-0116. The embodiments disclosed also utilize systems and methods for providing virtual object enhancements and invoking an artificial intelligence (AI) or machine learning (ML) algorithm to perform an analysis on any of the above-mentioned data; see paragraph 0048).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lin and Doken such that the computer-executable instructions include one or more machine-learning algorithms that are executed by the one or more processors to generate the suggestion of the at least one controlling requirement for the virtual appearances of the subject person, for the purpose of simplifying the virtual appearance enhancement process, thus reducing the time needed to obtain the modified images.

Note: The U.S. Patent and Trademark Office considers Applicant's "or", "one of", and "at least one of" language to be anticipated by any reference containing one of the preceding and/or subsequent corresponding elements.

Conclusion

13. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

14. Any inquiry concerning this communication or earlier communications from the examiner should be directed to CYNTHIA CALDERON, whose telephone number is (571) 270-3580. The examiner can normally be reached M-F 9:00 AM-5:00 PM. Examiner interviews are available via telephone, in person, and via video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, TWYLER HASKINS, can be reached at (571) 272-7406. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CYNTHIA CALDERON/
Primary Examiner, Art Unit 2639
01/15/2026

Prosecution Timeline

Feb 16, 2024
Application Filed
Oct 28, 2025
Non-Final Rejection — §102, §103
Dec 09, 2025
Response Filed
Jan 15, 2026
Final Rejection — §102, §103
Apr 01, 2026
Applicant Interview (Telephonic)
Apr 01, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604088: IMAGE PICKUP APPARATUS CAPABLE OF CONTROLLING POWER SUPPLY, ITS CONTROL METHOD, AND STORAGE MEDIUM (2y 5m to grant; granted Apr 14, 2026)
Patent 12604108: LIGHTFIELD CAMERA THAT CAN SIMULTANEOUSLY ACQUIRE 2D INFORMATION AND 3D SPATIAL INFORMATION FROM SAME DEPTH (2y 5m to grant; granted Apr 14, 2026)
Patent 12598388: IMAGE CAPTURING APPARATUS AND CONTROL METHOD THEREOF (2y 5m to grant; granted Apr 07, 2026)
Patent 12593120: METHOD FOR ACQUIRING A PHOTOGRAPHIC PORTRAIT OF AN INDIVIDUAL AND APPARATUS IMPLEMENTING THIS METHOD (2y 5m to grant; granted Mar 31, 2026)
Patent 12587745: IMAGE CAPTURING APPARATUS AND CONTROL METHOD THEREOF (2y 5m to grant; granted Mar 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77%
With Interview: 96% (+18.5%)
Median Time to Grant: 2y 4m
PTA Risk: Moderate
Based on 782 resolved cases by this examiner. Grant probability derived from career allow rate.
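The projection figures appear to follow simple arithmetic: the with-interview estimate equals the base grant probability plus the interview lift. A sketch under that assumption (the additive model is inferred from the displayed numbers, not documented by the tool):

```python
# Illustrative projection arithmetic, assuming additive interview lift
# capped at 100%. Both inputs come from the figures shown above.
base_grant_prob = 77.0   # career allow rate, percent
interview_lift = 18.5    # observed interview lift, percent

with_interview = min(base_grant_prob + interview_lift, 100.0)
print(f"Grant probability with interview: {with_interview:.0f}%")  # -> 96%
```

77.0 + 18.5 = 95.5, which rounds to the 96% shown, so the additive-lift reading is at least numerically consistent with the dashboard.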
