Prosecution Insights
Last updated: April 19, 2026
Application No. 17/483,573

VISUAL FEATURE TAGGING IN MULTI-VIEW INTERACTIVE DIGITAL MEDIA REPRESENTATIONS

Status: Non-Final Office Action (§103)
Filed: Sep 23, 2021
Examiner: CONNER, SEAN M
Art Unit: 2663
Tech Center: 2600 — Communications
Assignee: Fyusion Inc.
OA Round: 9 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 9-10
Time to Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% — above average (357 granted / 454 resolved; +16.6% vs TC avg)
Interview Lift: strong, +27.1% higher allow rate among resolved cases with an interview
Typical Timeline: 2y 9m average prosecution; 22 applications currently pending
Career History: 476 total applications across all art units
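
These headline figures reduce to simple arithmetic over the examiner's resolved cases. A minimal sketch of the derivation, assuming per-case records carry a disposition and an interview flag; the record layout and the with/without-interview split are hypothetical, and only the totals (357 granted / 454 resolved) come from the dashboard itself:

```python
# Sketch of the metric arithmetic above. CaseRecord and the example split
# are illustrative assumptions; only the totals (357 granted / 454 resolved)
# are taken from the dashboard.
from dataclasses import dataclass

@dataclass
class CaseRecord:
    granted: bool        # disposition: granted vs. abandoned
    had_interview: bool

def allow_rate(cases):
    """Career allow rate: grants divided by resolved cases."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    """Percentage-point gap in allow rate, with vs. without an interview."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Illustrative split consistent with 357 granted / 454 resolved:
cases = ([CaseRecord(True, True)] * 204 + [CaseRecord(False, True)] * 16 +
         [CaseRecord(True, False)] * 153 + [CaseRecord(False, False)] * 81)
print(f"allow rate:     {allow_rate(cases):.1%}")       # 78.6%, shown as 79%
print(f"interview lift: {interview_lift(cases):+.1%}")  # roughly +27 points
```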

Statute-Specific Performance

§101: 11.5% (-28.5% vs TC avg)
§103: 47.9% (+7.9% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 21.1% (-18.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 454 resolved cases.
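
Since each row reports both a rate and its offset from the Tech Center average, the implied TC baseline can be backed out (rate minus delta). A short sketch treating the table above as given data; notably, every statute implies the same 40.0% baseline, consistent with a single TC-level estimate:

```python
# Back out the implied Tech Center baseline from each row of the table.
# The percentages are the table's data; everything else is arithmetic.
RATES  = {"101": 0.115, "103": 0.479, "102": 0.120, "112": 0.211}
DELTAS = {"101": -0.285, "103": 0.079, "102": -0.280, "112": -0.189}

for statute, rate in RATES.items():
    tc_avg = rate - DELTAS[statute]   # e.g., 11.5% - (-28.5%) = 40.0%
    print(f"§{statute}: {rate:.1%} ({DELTAS[statute]:+.1%} vs TC avg {tc_avg:.1%})")
```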

Office Action — §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. The Amendment filed 24 November 2025 (hereinafter “the Amendment”) has been entered and considered. Claims 1, 15, and 17 have been amended. Claims 1-20, all the claims pending in the application, are rejected.

Response to Amendment

Independent claims 1, 15, and 17 have been amended to recite, in some variation: “wherein the focused MIDMR is used to construct a sub-MIDMR, wherein the sub-MIDMR operates as a full-featured MIDMR of the visual feature, wherein the sub-MIDMR is a separate MIDMR from the first MIDMR but is linked to the first MIDMR via a tag associated with the first MIDMR”. Applicant argues that the applied art does not teach or suggest the newly-added features or previously-recited features, including “wherein the feature identification message includes a request for a focused MIDMR of the first MIDMR, the request for the focused MIDMR requesting additional images or video of the visual feature associated with the object”, as recited in independent claims 1, 15, and 17, in some variation. The Examiner respectfully submits that all of the above features are taught by the applied art.

Before turning to Applicant’s individual arguments, the Examiner notes that the central disagreement between Applicant’s position and that of the Examiner appears to be the manner in which the claimed term “multi-view interactive digital media representation (MIDMR)” is interpreted. Applicant contends that the “Specification explicitly defines an MIDMR” (page 7 of the Amendment). The Examiner respectfully disagrees. According to MPEP 2111.01(IV), in order “To act as their own lexicographer, the applicant must clearly set forth a special definition of a claim term in the specification that differs from the plain and ordinary meaning it would otherwise possess” (emphasis added). Each of the portions of the specification cited by the Applicant falls well short of providing a clear definition of the term:

- [0042]: “MIDMRs provide numerous advantages” – advantages do not qualify as a definition
- [0042]: “an MIDMR can include…” – open-ended language
- [0063]: “Navigational inputs from an input device can be used to select which images to output in a MIDMR” – open-ended language
- [0063]: “For example, a user can tilt a mobile device or swipe a touch screen display to select the images to output in a MIDMR” – exemplary language
- [0064]: “The MIDMR approach differs from rendering an object from a full 3-D model” – not a definition of the term “MIDMR” itself; rather, commentary on the general “approach” associated therewith

The remainder of the disclosure likewise provides examples of features of an MIDMR or of what might be included therein, but does so with open-ended and exemplary language. The Examiner cannot find a clear definition of the term in the disclosure.
Since the Specification falls short of providing a clear definition for the term “MIDMR”, the words of the claim are given their plain meaning, per MPEP 2111:

- “Multi-view” – comprises 2 or more images
- “Interactive” – able to be interacted with by a user
- “Digital media representation” – describes any digital image

In summary, the broadest reasonable interpretation of MIDMR is “a collection of multiple images with which a user may interact”.

Turning now to Applicant’s individual arguments. On pages 7-8 of the Amendment, Applicant argues that Holzer’s disclosure of incorporating additional images into an existing surround view to make it more accurate is fundamentally different from generating a separate, independently manipulable MIDMR focused on a sub-component or visual feature within an already-complete MIDMR. In support of this assertion, Applicant points to portions of the disclosure which allegedly define MIDMR in a manner that precludes it from being interpreted as a collection of images.

First, as discussed above, the originally filed disclosure does not provide a clear definition of the term MIDMR; rather, the Specification uses open-ended and exemplary language when referring to an MIDMR, thus falling short of the clear definition required by MPEP 2111. That is, the broadest reasonable interpretation of MIDMR is “a collection of multiple images with which a user may interact”. Under this interpretation, the additional images that Holzer’s user is requested to capture from different angles do indeed read on the claimed focused MIDMR, contrary to Applicant’s assertions. As such, Holzer teaches the claimed features that the feature identification message includes a request for a focused MIDMR of the first MIDMR, the request for the focused MIDMR requesting additional images or video of the visual feature associated with the object, wherein the focused MIDMR is used to construct a sub-MIDMR, wherein the sub-MIDMR operates as a full-featured MIDMR of the visual feature, wherein the sub-MIDMR is a separate MIDMR from the first MIDMR ([0122-0130] discloses that an initial surround view corresponding to the claimed first MIDMR may lack views sufficient to represent the back of a hairstyle of a person or the graphics on a side of a mug, in which case the user is prompted to capture additional images from different angles to capture such features, and the additional images (claimed focused MIDMR) are incorporated into the initial surround view to create a new surround view (claimed sub-MIDMR)).

Moreover, the Examiner is not restricted to this interpretation. For example, the breadth of the claim language permits an interpretation in which the new surround view generated from the additional images corresponds to the claimed focused MIDMR. Notably, Holzer discloses that “the surround view can provide a multi-view interactive digital media representation” ([0071]) that appears to include many of the characteristics of the MIDMR of the subject invention.
In this case, Holzer still reads on the claimed features in question: the feature identification message includes a request for a focused MIDMR (final surround view that incorporates the additional images) of the first MIDMR (initial surround view that lacks the back of a hairstyle of a person or the graphics on a side of a mug), the request for the focused MIDMR requesting additional images or video of the visual feature associated with the object (prompt for user to provide additional images to be used in creating the final surround view), wherein the focused MIDMR is used to construct a sub-MIDMR, wherein the sub-MIDMR operates as a full-featured MIDMR of the visual feature, wherein the sub-MIDMR is a separate MIDMR from the first MIDMR ([0150-0152] discloses that content in a surround view can be automatically segmented and a new surround view (sub-MIDMR) is created including the segmented content without any background). Therefore, even if Holzer’s additional images could not correspond to the claimed focused MIDMR (which the Examiner does not acknowledge), Holzer would still read on the independent claims, as amended. Notably, in either of the above interpretations, the newly-created surround view focuses on a sub-feature of the object included in the initial surround view (e.g., the back of the hairstyle of the person or the graphics on the mug). In this way, the newly-created surround view reads on both qualifiers “focused” and “sub-”. Nothing in the current claim language precludes the above interpretations.

On pages 9-10 of the Amendment, Applicant argues that the applied art does not teach or suggest the newly added features of the independent claims. In support of this assertion, Applicant points to various portions of the specification of the subject application and differentiates aspects of the disclosed invention from the applied art. However, although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Many of the features upon which Applicant relies are not recited in the rejected claims:

- “user may effectively ‘zoom in’ on a feature of interest”
- “allows the user to view the feature of interest from different perspectives”
- “a separate sub-MIDMR of the mirror that can be independently manipulated and rotated”
- “the sub-MIDMR of the side-view mirror allows manipulation and viewing of the side-view mirror from angles and perspectives that could not be obtained by simply zooming into or rotating the vehicle MIDMR”

If Applicant believes that the details of the disclosed invention discussed in the Amendment distinguish over the applied art, the Examiner recommends amending the independent claims to clearly recite such details. Until that time, the Examiner maintains that the applied art continues to read on the newly added limitations of the claims, as mapped below.
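
To make the disputed limitation easier to follow, here is a minimal data-flow sketch of the amended claim language under the Examiner's broadest reasonable interpretation (an MIDMR as a collection of images a user may interact with). Every name below is an illustrative assumption; none of it is code from Holzer or from the application:

```python
# Illustrative sketch of the disputed data flow: a first MIDMR, a feature
# identification message requesting a focused MIDMR (additional images of a
# visual feature), and a sub-MIDMR built from the focused capture and linked
# back to the first MIDMR through a shared tag. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class MIDMR:
    images: list                    # "multi-view": two or more images
    spatial_info: dict              # e.g., IMU/location data per image
    tags: list = field(default_factory=list)

@dataclass
class FeatureIdentificationMessage:
    source: MIDMR
    visual_feature: str             # e.g., "graphics on side of mug"
    request_focused_midmr: bool = True  # asks for additional images/video

def build_sub_midmr(first: MIDMR, focused: MIDMR, feature: str) -> MIDMR:
    """Construct a separate MIDMR of the visual feature, linked to the
    first MIDMR via a tag associated with the first MIDMR."""
    tag = f"tag:{feature}"
    first.tags.append(tag)          # tag associated with the first MIDMR
    return MIDMR(images=list(focused.images),
                 spatial_info=dict(focused.spatial_info),
                 tags=[tag])        # separate object, linked by shared tag
```

Under the Examiner's reading, Holzer's prompted additional views play the role of `focused`; under Applicant's reading, the result of `build_sub_midmr` must be an independently manipulable representation, which is the crux of the dispute.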
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4, 9-12, and 14-18 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2015/0339846 to Holzer et al. (hereinafter “Holzer”) in view of U.S. Patent Application Publication No. 2015/0193863 to Cao (hereinafter “Cao”).

As to independent claim 1, Holzer discloses a method comprising: visual feature identification for a first multi-view interactive digital media representation (MIDMR) of an object, the first MIDMR including spatial information ([0139-0144] discloses identifying an object within a surround view; [0049] discloses that the surround view is a “multi-view interactive digital media representation” (MIDMR) including a plurality of images and accompanying location data captured as the capture device moves along a path around the object; see, for example, Fig. 4B, in which capture device 414 moves along path 416 around object of interest car 418; [0048, 0063] disclose that the surround views are created by “analyzing the spatial relationship between multiple images and video together with the location information data” and thus include “spatial information”); identifying via a processor a visual feature in the first MIDMR of the object based at least in part on the spatial information, wherein identifying the visual feature comprises comparing the first MIDMR with a plurality of reference MIDMRs ([0177] discloses a processor 2301 “for implementing particular embodiments of the present invention”; [0139-0144] discloses identifying an object of interest across multiple views in the MIDMR based on a visual tag thereof, as selected by a user; [0168-0176] and Fig. 20 disclose comparing the first surround view with one or more stored surround views in order to match items therein); wherein comparing the first MIDMR with the plurality of reference MIDMRs comprises comparing spatial information of the first MIDMR with spatial information of the plurality of reference MIDMRs
([0168-0176] and Fig. 20 disclose comparing the first surround view with one or more stored surround views in order to match items therein, and that the comparison between the first surround view and one or more stored surround views for matching items therein may use “various criteria…such as shape, appearance, texture, and context of the object”, wherein each of these criteria constitutes spatial information contained in the respective surround views); transmitting a feature identification message associated with the first MIDMR, wherein the feature identification message includes a request for a focused MIDMR of the first MIDMR, the request for the focused MIDMR requesting additional images or video of the visual feature associated with the object ([0066, 0122-0130] discloses that, “if a surround view is determined to need additional views to provide a more accurate model of the content or context, a user may be prompted to provide additional views”, wherein the prompt constitutes a transmitted message associated with the surround view; for example, if the images in the surround view are “not sufficient to allow recognition of an object of interest, then a prompt is given for the user to provide additional image(s) from different viewing angles”; similarly, if the images in the surround view are “not sufficient to distinguish the object of interest from similar but non-matching items at 1210, then a prompt is given for the user to provide additional image(s) from different viewing angles”; see Figs. 12-13, wherein the additional views are necessarily associated with a particular visual feature of the object that is required to “allow a visual search query to yield more accurate results” or “allow recognition of [the] object of interest”; for example, in Fig. 13B, the prompt for additional images is requested to provide “more specific information about the graphics on the mug”, wherein the graphics are a visual feature associated with the mug; another example is found in [0125]: “a portrait of a person may not sufficiently show the person's hairstyle if only pictures are taken from the front angles.
Additional pictures of the back of the person may need to be provided to determine whether the person has short hair or just a pulled-back hairstyle”, wherein the hairstyle is a visual feature associated with the person), wherein the focused MIDMR is used to construct a sub-MIDMR, wherein the sub-MIDMR operates as a full-featured MIDMR of the visual feature, wherein the sub-MIDMR is a separate MIDMR from the first MIDMR ([0066, 0122-0130] discloses that, once the additional views (e.g., claimed focused MIDMR) are received, they are “incorporated into the surround view”, thus creating a new and distinct surround view (e.g., claimed sub-MIDMR) from the original surround view (e.g., claimed first MIDMR); [0108-0113] discloses that each surround view (including the one newly created from the additional images) “can be viewed and navigated”, thus making it full-featured; in an alternative interpretation, the newly-created surround view which incorporates the additional images can correspond to the claimed focused MIDMR, and [0150-0152] discloses that content in a surround view can be automatically segmented and a new surround view (sub-MIDMR) is created including the segmented content without any background) but is linked to the first MIDMR via a tag associated with the first MIDMR ([0119-0120] discloses that “any two surround views” which “have some overlap in content…can be linked to one another through this overlap”; such linking pre-supposes a tag, but [0139-0144] expressly discloses that “tagging can provide identification for objects” common throughout surround views; since the first surround view and the focused and sub-surround views both include overlapping content (e.g., the mug or the person with the hairstyle), these surround views can be linked by virtue of that overlap in content).

Holzer does not expressly disclose that the surround view query matching is performed in a client-server setting or that the matching involves comparison of scale information. That is, Holzer does not expressly disclose receiving via a communications interface at a server a visual feature identification request, wherein comparing the first MIDMR with the plurality of reference MIDMRs comprises comparing scale information of the first MIDMR with scale information of the plurality of reference MIDMRs, or that the feature identification message is transmitted from the server via the communications interface in response to the feature identification request.

Cao, like Holzer, is directed to identifying a visual feature in an image and searching for images that include similar visual features (Abstract). In particular, Cao discloses a server which receives a product search query corresponding to at least one product image, searches for product images that are similar to the at least one product image, and transmits the found product images to a user terminal in response to the query ([0081-0086] and Fig. 2). Cao discloses that the search for product images that are similar to the at least one product image comprises identifying product images that have a similarity in size to the query product image that is greater than or equal to a preset similarity threshold value ([0097-0100, 0064]). Cao further discloses that the similarity search also compares texture ([0083, 0100, 0064]), much like Holzer.
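
The size-plus-texture matching Cao is cited for can be pictured as a thresholded similarity test. A hedged sketch under assumed descriptors; Cao's actual feature extraction, weighting, and threshold value are not reproduced here:

```python
# Cao-style match sketch: compare scale (size) and texture descriptors of a
# query item against reference items, keeping those whose combined similarity
# meets a preset threshold. Descriptors, weights, and threshold are
# illustrative assumptions, not Cao's implementation.
import math

def similarity(query: dict, ref: dict) -> float:
    """Blend size similarity and texture similarity into one score in [0, 1]."""
    size_sim = min(query["size"], ref["size"]) / max(query["size"], ref["size"])
    # Cosine similarity of texture histograms:
    dot = sum(q * r for q, r in zip(query["texture"], ref["texture"]))
    norm = (math.sqrt(sum(q * q for q in query["texture"])) *
            math.sqrt(sum(r * r for r in ref["texture"])))
    texture_sim = dot / norm if norm else 0.0
    return 0.5 * size_sim + 0.5 * texture_sim

def find_matches(query, references, threshold=0.8):
    """Return references meeting the preset similarity threshold."""
    return [ref for ref in references if similarity(query, ref) >= threshold]
```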
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Holzer to perform the object identification process in response to a user request to a server, to perform the comparison between the first image set (first MIDMR) and the database image set (plurality of reference MIDMRs) by comparing a size (scale) and texture of objects therein, and to transmit results of the query back from the server to the user terminal, as taught by Cao, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. It is predictable that the proposed modification would have allowed a user to remotely query database MIDMRs for matching objects, thereby reducing storage requirements in the user’s client device.

As to claim 2, Holzer as modified by Cao further teaches that identifying the visual feature comprises processing user input that identifies a first location on the first MIDMR ([0140] of Holzer discloses that the user selects a point or region for the object tag in the MIDMR).

As to claim 3, Holzer as modified by Cao further teaches that the spatial information is determined at least in part based on inertial data ([0049] of Holzer discloses that the location information can be obtained from an Inertial Measurement Unit).

As to claim 4, Holzer as modified by Cao further teaches that identifying the visual feature further comprises: selecting a reference MIDMR that is similar to the first MIDMR, identifying a reference visual feature associated with the reference MIDMR, and locating the reference visual feature in the first MIDMR ([0168-0176] of Holzer discloses comparing the views in the first surround view with the respective views in one or more stored surround views, creating a ranked list of stored surround views matching the first surround view, and matching items therein).

As to claim 9, Holzer as modified by Cao further teaches that the spatial information comprises depth information ([0049-0050] of Holzer discloses that the spatial information may include depth information).

As to claim 10, Holzer as modified by Cao further teaches that the first MIDMR includes a plurality of different viewpoint images of the object ([0049] of Holzer discloses that the surround view is a multi-view interactive digital media representation including a plurality of images captured as the capture device moves along a path around the object (i.e., from different viewpoints)).

As to claim 11, Holzer as modified by Cao further teaches that the spatial information comprises three-dimensional location information ([0049, 0149] of Holzer discloses that the surround views include 3D characteristics of objects in the image data).

As to claim 12, Holzer as modified by Cao further teaches that the visual feature represents a physical location on the object ([0139-0144] of Holzer discloses that a point on an object of interest may be selected as the tagged visual feature).

As to claim 14, Holzer as modified by Cao further teaches that the MIDMR of the object further comprises three-dimensional shape information ([0157-0169] of Holzer discloses that the surround views include 3D shape information).

Independent claim 15 recites a system comprising: a processor; and memory configured to store instructions, the instructions configured to cause the processor
([0179] and Fig. 23 of Holzer discloses a processor 2301 and memory 2303, the memory storing program instructions for execution by the processor) to perform the method steps recited in independent claim 1. Accordingly, claim 15 is rejected for reasons analogous to those discussed above in conjunction with claim 1.

Claim 16 recites features nearly identical to those recited in claim 4. Accordingly, claim 16 is rejected for reasons analogous to those discussed above in conjunction with claim 4.

Independent claim 17 recites one or more non-transitory computer readable media having instructions stored thereon for performing a method ([0180] of Holzer discloses machine readable media that include program instructions for performing the disclosed algorithm), the method comprising the method steps recited in independent claim 1. Accordingly, claim 17 is rejected for reasons analogous to those discussed above in conjunction with claim 1.

Claim 18 recites features nearly identical to those recited in claim 3. Accordingly, claim 18 is rejected for reasons analogous to those discussed above in conjunction with claim 3.

Claims 5-8 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Holzer in view of Cao and further in view of U.S. Patent Application Publication No. 2015/0242686 to Lenka et al. (hereinafter “Lenka”).

As to claim 5, Holzer as modified by Cao does not expressly disclose that identifying the visual feature comprises: determining an object type associated with the object, identifying a predefined visual feature associated with the object type, and locating the predefined visual feature in the first MIDMR. Lenka, like Holzer, is directed to comparing objects in images (Abstract). Lenka discloses a trained classifier that classifies parts and subparts of an object in an image based on visual features thereof, the visual features including contours and boundaries ([0031-0034]). Lenka further discloses comparing an image of a damaged vehicle (Fig. 5) with a previous image of the vehicle when undamaged (Fig. 4) in order to assess damage ([0040]). As part of the damage assessment process, Lenka discloses identifying parts and subparts of an object in a received query image ([0045] and Fig. 9). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the proposed combination of Holzer and Cao to use a trained classifier to classify parts and subparts of an object in the query image (first MIDMR), as taught by Lenka, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. It is predictable that the proposed modification would have facilitated the assessment of damage to a vehicle ([0040] of Lenka).

As to claim 6, the proposed combination of Holzer, Cao, and Lenka further teaches identifying a visual feature in a second MIDMR of the object, wherein the first MIDMR of the object represents the object at a first point in time and wherein the second MIDMR of the object represents the object at a second point in time ([0139] of Holzer discloses that the tagged feature from a first view of the surround view (MIDMR) is maintained in subsequent images of the surround view; see, for example, Fig. 4B, in which capture device 414 moves along path 416 around object of interest car 418; that is, the multiple views in the surround view are captured at different points in time).
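
The claims 6-8 mapping amounts to comparing the same tagged feature across two time-separated captures and flagging a change such as vehicle damage. A minimal sketch, with an assumed descriptor and tolerance standing in for Lenka's trained-classifier pipeline:

```python
# Sketch of the claims 6-8 idea: compare a visual feature captured in a first
# MIDMR (time t1) against the same feature in a second MIDMR (time t2) and
# flag a change, e.g. vehicle damage. The descriptor values and the tolerance
# are illustrative assumptions, not Lenka's classifier output.
def feature_changed(descriptor_t1, descriptor_t2, tolerance=0.1):
    """Return True if the feature descriptors diverge beyond the tolerance."""
    diff = sum(abs(a - b) for a, b in zip(descriptor_t1, descriptor_t2))
    return diff / len(descriptor_t1) > tolerance

undamaged = [0.12, 0.40, 0.33, 0.15]   # e.g., edge/contour statistics at t1
damaged   = [0.45, 0.22, 0.18, 0.15]   # same tagged feature at t2
if feature_changed(undamaged, damaged):
    print("change detected: candidate damage on tagged feature")
```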
As to claim 7, the proposed combination of Holzer and Cao does not expressly disclose comparing the visual feature in the first MIDMR of the object to the visual feature in the second MIDMR of the object to identify a change in the object between the first time and the second time. However, [0035-0048] of Lenka discloses comparing visual features in an image of a damaged vehicle (Fig. 5) with visual features of a previous image of the vehicle when undamaged (Fig. 4) in order to assess damage. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the proposed combination of Holzer and Cao to compare visual features between Holzer’s surround views (MIDMRs) to identify changes in an object over time, as taught by Lenka, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. It is predictable that the proposed modification would have facilitated the assessment of damage to a vehicle ([0040] of Lenka).

As to claim 8, the proposed combination of Holzer, Cao, and Lenka further teaches that the object is a vehicle and wherein the change in the object represents damage to the object ([0040] of Lenka discloses that the object is a vehicle and the change between images represents damage to the vehicle; the reasons for combining the references are the same as those discussed above in conjunction with claim 7).

Claim 19 recites features nearly identical to those recited in claim 5. Accordingly, claim 19 is rejected for reasons analogous to those discussed above in conjunction with claim 5.

Claim 20 recites features nearly identical to those recited in claim 8. Accordingly, claim 20 is rejected for reasons analogous to those discussed above in conjunction with claim 8.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Holzer in view of Cao and further in view of “A Spatio-Temporal Pyramid Matching for Video Retrieval” by Choi et al. (hereinafter “Choi”).

As to claim 13, Holzer as modified by Cao further teaches that the spatial information comprises depth information ([0049-0050] of Holzer discloses that the spatial information may include depth information). The proposed combination of Holzer and Cao does not expressly disclose that the spatial information also comprises visual flow between the different viewpoint images. Choi, like Holzer, is directed to a retrieval system which searches relevant multi-image (video) clips based on a given query multi-image (video) clip (Abstract). In particular, Choi discloses that the optical flow in each clip is determined and compared with the optical flow of the query clip in order to find the closest matching clips (Section 3). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the proposed combination of Holzer and Cao to perform a comparison between the optical flow of the query multi-image media item and the optical flow of the database multi-image media items, as taught by Choi, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. It is predictable that the proposed modification would have improved retrieval performance by virtue of including an additional feature for comparison – namely, optical flow (Abstract of Choi).
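
Choi-style flow comparison can be approximated with off-the-shelf dense optical flow: summarize each clip's motion, then rank clips by descriptor distance. The sketch below uses OpenCV's Farneback flow and a magnitude-weighted orientation histogram as a simplified stand-in for Choi's spatio-temporal pyramid matching, which it does not reproduce:

```python
# Simplified Choi-style retrieval cue: pool dense optical flow over a clip
# into one descriptor and compare clips by descriptor distance. The pooling
# scheme is an illustrative assumption, not Choi's actual method.
import cv2
import numpy as np

def flow_descriptor(frames, bins=8):
    """Magnitude-weighted histogram of flow orientations over a clip.
    `frames` is a list of equal-size grayscale uint8 arrays."""
    hist = np.zeros(bins)
    for prev, nxt in zip(frames, frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        h, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
        hist += h
    return hist / (hist.sum() or 1.0)

def flow_distance(clip_a, clip_b):
    """Smaller distance = more similar motion between the two clips."""
    return float(np.linalg.norm(flow_descriptor(clip_a) - flow_descriptor(clip_b)))
```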
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEAN M CONNER, whose telephone number is (571) 272-1486. The examiner can normally be reached 10 AM - 6 PM Monday through Friday, and some Saturday afternoons.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Greg Morse, can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SEAN M CONNER/
Primary Examiner, Art Unit 2663

Prosecution Timeline

Sep 23, 2021: Application Filed
Sep 29, 2022: Non-Final Rejection — §103
Feb 06, 2023: Response Filed
Feb 14, 2023: Applicant Interview (Telephonic)
Feb 14, 2023: Final Rejection — §103
Apr 20, 2023: Response after Non-Final Action
Jun 21, 2023: Request for Continued Examination
Jun 27, 2023: Response after Non-Final Action
Jul 17, 2023: Non-Final Rejection — §103
Nov 20, 2023: Response Filed
Dec 01, 2023: Final Rejection — §103
Feb 29, 2024: Response after Non-Final Action
Mar 29, 2024: Request for Continued Examination
Apr 02, 2024: Response after Non-Final Action
Apr 05, 2024: Non-Final Rejection — §103
Jul 02, 2024: Response after Non-Final Action
Jul 02, 2024: Response Filed
Oct 01, 2024: Response Filed
Jan 20, 2025: Final Rejection — §103
Mar 24, 2025: Response after Non-Final Action
Apr 24, 2025: Request for Continued Examination
Apr 25, 2025: Response after Non-Final Action
May 03, 2025: Non-Final Rejection — §103
Aug 08, 2025: Response Filed
Aug 20, 2025: Final Rejection — §103
Oct 23, 2025: Response after Non-Final Action
Nov 24, 2025: Request for Continued Examination
Dec 01, 2025: Response after Non-Final Action
Dec 31, 2025: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586374: MULTIMODAL VIDEO SUMMARIZATION (2y 5m to grant; granted Mar 24, 2026)
Patent 12586412: USING TWO-DIMENSIONAL IMAGES AND MACHINE LEARNING TO IDENTIFY INFORMATION PERTAINING TO EYE SHAPE (2y 5m to grant; granted Mar 24, 2026)
Patent 12585862: Training Data for Training Artificial Intelligence Agents to Automate Multimodal Software Usage (2y 5m to grant; granted Mar 24, 2026)
Patent 12579778: Pattern Matching Device, Pattern Measuring System, Pattern Matching Program (2y 5m to grant; granted Mar 17, 2026)
Patent 12573180: COLLECTION OF IMAGE DATA FOR USE IN TRAINING A MACHINE-LEARNING MODEL (2y 5m to grant; granted Mar 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 9-10
Grant Probability: 79%
With Interview: 99% (+27.1%)
Median Time to Grant: 2y 9m
PTA Risk: High
Based on 454 resolved cases by this examiner. Grant probability derived from career allow rate.
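
One plausible way the projection figures fit together, assuming the interview lift is applied additively in percentage points and the displayed probability is capped at 99%; the dashboard's actual model is not disclosed:

```python
# Hedged sketch of the projection arithmetic: baseline grant probability from
# the career allow rate, interview lift applied additively, result capped.
# The additive-and-capped model is an assumption about the dashboard.
BASE_GRANT_PROB = 357 / 454      # career allow rate, ~79%
INTERVIEW_LIFT = 0.271           # +27.1 percentage points

with_interview = min(BASE_GRANT_PROB + INTERVIEW_LIFT, 0.99)
print(f"baseline: {BASE_GRANT_PROB:.0%}, with interview: {with_interview:.0%}")
# -> baseline: 79%, with interview: 99%
```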
