Prosecution Insights
Last updated: April 19, 2026
Application No. 17/661,867

SYSTEM TO FACILITATE PURCHASING

Non-Final OA — §101, §103
Filed
May 03, 2022
Examiner
KIYABU, KARIN A
Art Unit
2626
Tech Center
2600 — Communications
Assignee
Evrdrive Inc.
OA Round
5 (Non-Final)
Grant Probability
57% (Moderate)
Expected OA Rounds
5-6
Time to Grant
3y 1m
With Interview
97%

Examiner Intelligence

Career Allow Rate
57% (213 granted / 373 resolved; -4.9% vs TC avg)
Interview Lift
+39.8% (strong lift among resolved cases with interview)
Avg Prosecution
3y 1m (18 applications currently pending)
Total Applications
391 (across all art units)

Statute-Specific Performance

§101: 2.9% (-37.1% vs TC avg)
§103: 66.5% (+26.5% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§112: 12.0% (-28.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 373 resolved cases

Office Action

§101, §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This is in reply to a Request for Continued Examination filed on September 29, 2025 regarding Application No. 17/661,867. Applicants amended claim 1. Claims 1-4 are pending.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicants’ submission filed on September 29, 2025 has been entered.

Response to Arguments

Applicants’ arguments filed on September 29, 2025 have been fully considered but they are not persuasive. In response to the remark regarding claims 1-4 and unpatentable subject matter (p. 4), please see the 35 U.S.C. 101 rejections discussed below. In response to the argument regarding newly amended independent claim 1, the cited references, “a processor storing instructions executable by a processor to render the plurality of captured images and virtual images and at least one interlaced image in an image sequence utilizing a peppers ghost projector, the image sequence corresponding to detected device inputs” (p. 5; note: although the argument appears to be directed to the rejection of independent claim 1 based on Sallent Puigcercos in US 2015/0009413 A1, Thomas in US 2016/0306323 A1, and Garcia, III et al. in US 2016/0381323 A1 (see pp. 4-5), independent claim 1 was also rejected under Vats in US 2017/0103584 A1, O’Connell in US 2024/0075402 A1, Park in KR 10-2019-0084610 A, and Bradski et al.
in US 2019/0094981 A1 (see pages 13-17 of the March 31, 2025 final Office action) and is therefore addressed herein), the Office respectfully disagrees and submits that all features of newly amended independent claim 1 are taught and/or suggested by the cited references. For example, figures 15(a)-17(b) and paragraphs [0086] and [0095] of Vats teach: a processor-implemented method comprising: providing a processor 1604; providing a computer-readable memory 1605 storing instructions 1607 executable by the processor 1604, wherein the instructions 1607 comprise instructions to: render an image sequence corresponding to 1503-1505 utilizing a peppers ghost projector. Also, paragraphs [0002], [0081], and [0223] of O’Connell teach: render the plurality of captured images and virtual images and at least one interlaced image in an image sequence utilizing a peppers ghost projector, the image sequence corresponding to detected camera device inputs. Thus, Vats as modified by O’Connell teaches and/or suggests the recited features.

In response to the arguments regarding claim 1, cited art, dependent claims 2-4, and allowed (pp. 5-6), the Office respectfully disagrees and submits that all features of newly amended independent claim 1 are taught and/or suggested by the cited references, as discussed above and in the rejections. As such, newly amended independent claim 1 is not allowable. In addition, claims 2-4 are not allowable by virtue of their individual dependencies from newly amended independent claim 1, and as discussed in the rejections. For the reasons discussed above and in the rejections, pending claims 1-4 are not allowable.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim(s) recite(s) steps for a method for selling a vehicle, which is a marketing or sales activity and thus grouped as a certain method of organizing human activity. The process of render an image sequence of a vehicle, enable a buyer to manipulate the image sequence, and provide the buyer with a price for the vehicle all describe the abstract idea. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application because the additional element of a processor used to implement the method for selling a vehicle is recited at a high level of generality (i.e., as a generic processor performing a generic computer function of executing computer-readable memory storing instructions to: render an image sequence of a vehicle; enable a buyer to manipulate the image sequence; and provide the buyer with a price for the vehicle) such that it amounts to no more than merely applying the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.

This judicial exception is also not integrated into a practical application because the additional elements of receive images, designate images, generate a data set by interpolating, generate virtual images, and generate interlaced images amount to no more than insignificant extra-solution activity as data gathering and compiling, mental processes, a mathematical concept, and mere data output recited at a high level of generality and thus also insignificant extra-solution activity, respectively. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

In addition, this judicial exception is not integrated into a practical application because the additional element of utilizing a peppers ghost projector (which is understood in the art as a projector that displays a virtual image created by a projected source image reflected from a partially transparent screen at a 45° angle) to render the image sequence of the vehicle amounts to no more than using the peppers ghost projector as a tool to perform the existing process of rendering an image sequence of a vehicle. Accordingly, a peppers ghost projector does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed with respect to integration of the abstract idea into a practical application, the additional elements of using a processor to implement the method for selling a vehicle, receive images, designate images, generate a data set by interpolating, generate virtual images, generate interlaced images, and using a peppers ghost projector to render the image sequence of the vehicle amount to no more than using a generic computer component to implement the exception, insignificant extra-solution activity as data gathering and compiling, mental processes, a mathematical concept, mere data output recited at a high level of generality and thus also insignificant extra-solution activity, and using a peppers ghost projector as a tool to perform the existing process of displaying an image sequence of a vehicle, respectively. Merely using a generic computer component to implement the exception, insignificant extra-solution activity, mental processes, a mathematical concept, and merely using a peppers ghost projector as a tool to implement the abstract idea cannot provide an inventive concept. The claim is not patent eligible.

Claims 2-4 are rejected under 35 U.S.C. 101 as being dependent upon rejected base claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 1-2 are rejected under 35 U.S.C. 103 as being unpatentable over Vats in US 2017/0103584 A1 (hereinafter Vats) in view of O’Connell in US 2024/0075402 A1 (hereinafter O’Connell), in further view of Park in KR 10-2019-0084610 A (hereinafter Park; an original copy and full machine translation with Description page numbers added thereof was provided with the Office action mailed on August 22, 2024). Note: data storage 1605 is misidentified as element “1505” in l. 4 of [0095] of Vats – see FIG. 16, [0088], and [0094].

Regarding claim 1, Vats teaches: A processor-implemented method comprising (Vats: processor 1604 and method corresponding to FIGs.
15(a)-(c); FIGs. 15(a)-(c) and 16, “[0086] FIG. 15(a) shows a display system 1502 made of multiple display based on pepper's ghost technique. It is showing bike 1501. User see the bike from different positions 1503, 1504 and 1505. FIG. 15(b) show the display system 1502 is connected to the output and showing bike 1501. FIG. 15(c) show that the display system 1501 show different face of bike in different display 1507, 1506 and 1508 giving an illusion of a 3d bike standing at one position showing different face from different side.”, and “[0095] In general, processor 1604 may be capable of executing program instructions 1607… stored in data storage 1505 to carry out the various functions described herein. Therefore, data storage 1605 may include a non-transitory computer-readable medium, having stored thereon program instructions that, upon execution by client device 1612, cause client device 1612 to carry out any of the methods, processes, or functions disclosed in this specification and/or the accompanying drawings. The execution of program instructions 1607 by processor 1604 may result in processor 1604 using data 1606.”, see also FIGs. 17(a)-(b)): providing a processor (1604 in FIG. 16) (Vats: FIG. 16 and [0095]); providing a computer-readable memory (1605 in FIG. 16) storing instructions (1607 in FIG. 16) executable by the processor, wherein the instructions comprise instructions to (Vats: FIG. 16 and [0095]): receive, a plurality of images of a vehicle (bike vehicle in FIGs. 15(a)-(c)) (Vats: see FIGs. 15(a)-(c) and “[0086] FIG. 15(a) shows a display system 1502 made of multiple display based on pepper's ghost technique. It is showing bike 1501. User see the bike from different positions 1503, 1504 and 1505. FIG. 15(b) show the display system 1502 is connected to the output and showing bike 1501. FIG.
15(c) show that the display system 1501 show different face of bike in different display 1507, 1506 and 1508 giving an illusion of a 3d bike standing at one position showing different face from different side.”); render an image sequence (corresponding to 1503-1505 in FIGs. 15(a)-(c)) utilizing a peppers ghost projector (Vats: see FIGs. 15(a)-(c) and “[0086] FIG. 15(a) shows a display system 1502 made of multiple display based on pepper's ghost technique. It is showing bike 1501. User see the bike from different positions 1503, 1504 and 1505. FIG. 15(b) show the display system 1502 is connected to the output and showing bike 1501. FIG. 15(c) show that the display system 1501 show different face of bike in different display 1507, 1506 and 1508 giving an illusion of a 3d bike standing at one position showing different face from different side.”). However, it is noted that Vats, as particularly cited, does not teach: said plurality of images of a vehicle is a plurality of captured images of said vehicle; designate a first image from the plurality of captured images as a first reference frame, the first reference frame corresponding to a first data set; designate a second image from the plurality of captured images as a second reference frame, the second reference frame corresponding to a second data set; generate a third data set by interpolating data from the first data set with data from the second data set; generate one or more virtual images corresponding to the third data set; generate one or more interlaced images by interlacing one or more captured images with one or more virtual images; said render is render the plurality of captured images and virtual images and at least one interlaced image in said image sequence utilizing said peppers ghost projector, the image sequence corresponding to detected device inputs. 
O’Connell teaches: receive, a plurality of captured images of a subject (O’Connell: see “[0218] An aspect of the invention provides a method of filming a subject to be projected as a Peppers Ghost and/or AR image….” and “[0223] The lighting may be measured by a 360 degrees camera device… capable of capturing and accurately stitching 360 degrees images in RAW and/or DNG format….”); designate a first image (i.e., of stitched images) from the plurality of captured images as a first reference frame, the first reference frame corresponding to a first data set (corresponding to the first image) (O’Connell: see “[0002] This disclosure relates to filming and displaying peppers ghost images or similar 3D or hologram images.“ and “[0223] The lighting may be measured by a 360 degrees camera device… capable of capturing and accurately stitching 360 degrees images in RAW and/or DNG format….”); designate a second image (i.e., a second image, adjacent to the first image, of stitched images) from the plurality of captured images as a second reference frame, the second reference frame corresponding to a second data set (corresponding to the second image) (O’Connell: see “[0002] This disclosure relates to filming and displaying peppers ghost images or similar 3D or hologram images.“ and “[0223] The lighting may be measured by a 360 degrees camera device… capable of capturing and accurately stitching 360 degrees images in RAW and/or DNG format….”); generate a third data set (i.e., corresponding to stitching the first and second images) by interpolating data from the first data set with data from the second data set (O’Connell: see “[0002] This disclosure relates to filming and displaying peppers ghost images or similar 3D or hologram images.“ and “[0223] The lighting may be measured by a 360 degrees camera device… capable of capturing and accurately stitching 360 degrees images in RAW and/or DNG format….”); generate one or more virtual images corresponding to the third data set 
(O’Connell: i.e., corresponding to stitched images; see “[0002] This disclosure relates to filming and displaying peppers ghost images or similar 3D or hologram images.“ and “[0223] The lighting may be measured by a 360 degrees camera device… capable of capturing and accurately stitching 360 degrees images in RAW and/or DNG format….”); generate one or more interlaced images by interlacing one or more captured images with one or more virtual images (O’Connell: see “[0002] This disclosure relates to filming and displaying peppers ghost images or similar 3D or hologram images.“ and “[0223] The lighting may be measured by a 360 degrees camera device… capable of capturing and accurately stitching 360 degrees images in RAW and/or DNG format….”); render the plurality of captured images and virtual images and at least one interlaced image in an image sequence utilizing a peppers ghost projector, the image sequence corresponding to detected device inputs (detected camera device inputs) (O’Connell: see “[0002] This disclosure relates to filming and displaying peppers ghost images or similar 3D or hologram images.“, “[0081]…The method may comprise projecting the film such that the Peppers Ghost image of the subject appears the same height as the subject in real-life.”, and “[0223] The lighting may be measured by a 360 degrees camera device… capable of capturing and accurately stitching 360 degrees images in RAW and/or DNG format….”). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to include: the features taught by O’Connell, such that Vats as modified teaches: the claimed features including receive, a plurality of captured images of a vehicle (receive as taught by Vats combined with receive as taught by O’Connell – i.e., receive, a plurality of images of a vehicle of Vats, where the plurality of images is a plurality of captured images taught by O’Connell), to provide peppers ghost projection. 
However, it is noted that Vats as modified by O’Connell, as particularly cited, does not teach: enable a buyer to manipulate the image sequence; and provide the buyer with a price for the vehicle.

Park teaches: enable a buyer (consumer) to manipulate an image sequence (Park: see FIG. 5A (rightmost figure) and p. 5, ¶ 4 (“… Referring to FIG. 5A,.. when the consumer moves the white vehicle of the corresponding model displayed on the screen back and forth or left and right, the user interface unit (410) can move the virtual reality, augmented reality, or holographic image in response.”), see also p. 6, ¶ 3 (“A purchase button may be included on the screen of the kiosk (650). When a consumer inputs the purchase button, the vehicle display system (200) may connect a counselor stationed in multiple spaces to respond to the consumer.”)); and provide the buyer with a price for a vehicle (Park: p. 4, ¶ 5 (“… [T]he user interface unit (410) can transmit and receive information related to vehicle exhibition and purchase. Here, the information related to vehicle exhibition and purchase can include… an estimated purchase price of the vehicle….”)).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to include: the features taught by Park, such that Vats as modified teaches: enable a buyer to manipulate the image sequence (image sequence of Vats as modified combined with enable and manipulate as taught by Park); and provide the buyer with a price for the vehicle (vehicle of Vats as modified combined with provide as taught by Park), to provide a buyer with vehicle angle view and price information as taught by Park.
Regarding claim 2, Vats as modified by O’Connell and Park teaches: The processor-implemented method of claim 1, wherein the image sequence comprises a photomontage of the vehicle (O’Connell: a photomontage of a subject; see “[0218] An aspect of the invention provides a method of filming a subject to be projected as a Peppers Ghost and/or AR image….” and “[0223] The lighting may be measured by a 360 degrees camera device… capable of capturing and accurately stitching 360 degrees images in RAW and/or DNG format….”, see also “[0002] This disclosure relates to filming and displaying peppers ghost images or similar 3D or hologram images.“; claim 1 above (vehicle)).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Vats in view of O’Connell, in further view of Park, and in further view of Garcia, III et al. in US 2016/0381323 A1 (hereinafter Garcia).

Regarding claim 3, Vats as modified by O’Connell and Park teaches: The processor-implemented method of claim 1. However, it is noted that Vats as modified by O’Connell and Park, as particularly cited, does not teach: wherein the buyer manipulates the image sequence utilizing a mouse. Garcia teaches: wherein a buyer (customer) manipulates an image sequence (corresponding to selecting the exterior view, interior view, and/or magnification buttons) utilizing a mouse (Garcia: FIGs. 9-11 and 18-20, “[0076]… [In FIG. 9,] [t]he Spinner may enable the customer to rotate the vehicle, as well as view it in any one of three modes: exterior-doors-closed, exterior-doors-open, and interior. The Spinner may create the illusion of three dimensions in the exterior by condensing a series of two-dimensional images along a single X and Y plane, the order of the photos placed in such a way that moving in one direction on the x axis may make it appear as if the vehicle is rotating.
For the interior model, a two-dimensional panoramic image in the format of a Mercator projection may be converted from a photograph measured in pixels to a Canvas measured in degrees, reformatting the image onto a surface which may behave like a sphere but is in fact still only exists in three dimensions.”, “[0077]… [T]he Spinner application may display the front image of a vehicle. Important pieces of information about the vehicle, such as the… sales price… may be displayed consistently in the upper left hand corner…. Along the right side of the viewer, moving from top to bottom may be the buttons to select this exterior view, the exterior view with all doors, windows, and other hatches open, the interior view, the top-down view, and a tab to modify the magnification. Magnification may also be adjusted directly from the input of certain devices, such as the scrolling wheel on the top of a computer mouse. Holding the left mouse button and dragging the mouse may cause the image to move. The illusion of a real three-dimensional vehicle may rotate in the direction the mouse is moving, either clockwise or counterclockwise…. The customer may also exit this viewer at any time by a number of options, including selecting… the “Financing Terms” tab… or by hitting the “Buy This Car” button…. FIG. 10 illustrates a view of the right side of a virtual display of the vehicle…, and FIG. 11 illustrates a view of the rear section of the vehicle. The customer may view more angles, and there may be several more images that fill in the incremental gaps between each of the previous figures.”, and “[0083] FIG. 18 illustrates an Interior Front View. This may be the standard view presented to the customer when they enter the interior of the vehicle, which activates a specific animation procedure, which may be the same one used when transitioning into the interior during the interactive tour. There may be a slight curvature to the image. 
This may be due to the unique properties of the Interior Spinner, which may be a similar yet separate system from the Exterior Spinner. The interior may be constructed of a single Mercator projection applied to the two-dimensional illusion of the inside surface of a sphere. The same user interface features that are present in the Exterior Spinner may be available here. The only addition here may be that the customer may be able to move vertically as well as horizontally over the simulated three-dimensional environment…. FIG. 19 illustrates an example of an Interior Annotation-Feature. The user interface may be constructed as viewing a feature in the exterior model. FIG. 20 illustrates an Interior Rear View. This may be the view for the customer looking out the back of the vehicle from the inside.”, see also FIGs. 12-14).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to include: the features taught by Garcia, such that Vats as modified teaches: wherein the buyer manipulates the image sequence utilizing a mouse (buyer and manipulate the image sequence taught by Vats as modified combined with the buyer manipulates an image sequence and mouse of Garcia – i.e., manipulates of Vats as modified where manipulates is as taught by Garcia), so a customer can manipulate the image sequence using a common input device.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Vats in view of O’Connell, in further view of Park, and in further view of Vai et al. in US 2020/0192466 A1 (hereinafter Vai).

Regarding claim 4, Vats as modified by O’Connell and Park teaches: The processor-implemented method of claim 1. However, it is noted that Vats as modified by O’Connell and Park, as particularly cited, does not teach: wherein the buyer manipulates the image sequence utilizing haptic sensors. Vai teaches: wherein a user (U in FIG.
2) manipulates an image (projection surface 3 image) utilizing haptic sensors (7 in FIG. 2) (Vai: FIGs. 2 and 4, “[0039] At least one sensor 7 may be a first vibration sensor which can detect the vibrations generated by a part of the body of a user that contacts the projection surface 3. The first vibration sensor can detect the vibrations transmitted to the projection surface 3, due to the contact of said projection surface 3 with the user U, to detect the position of a part of the user body on said surface projection and determine a respective command given by a user. For example, said first vibration sensor can detect a simple touch or drag of a user's U finger on the projection surface 3. For example, the vibrations detected by said first vibration sensor can be used in “touch recognition” algorithms to detect a touch or drag of a finger on the projection surface 3.”, and “[0047] To interact with the virtual panel, for example to choose the destination or to change the information to be displayed, the user can place his/her hand/finger on the desired part of the projection surface 3 (see FIG. 4). The… first vibration sensor, can transform… the vibration signal of the virtual panel, with respect to the user's hand, into electrical signals which will be sent to the control unit 9. The algorithms inside the control unit will understand if and which virtual switch 11 of the projected image has been chosen by the user and, consequently, can perform the related operations, comprising editing the image to be projected to display another menu.”). 
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to include: the feature taught by Vai, such that Vats as modified teaches: wherein the buyer manipulates the image sequence utilizing haptic sensors (buyer and manipulate the image sequence taught by Vats as modified combined with the buyer manipulates an image sequence and haptic sensors of Vai – i.e., manipulates of Vats as modified where manipulates is as taught by Vai), to enable input via haptic sensors.

Conclusion

In the response to this Office action, it is requested that support be shown for language added to any claims on amendment and any new claims by specifically citing to page(s) and line number(s) and/or paragraph(s) in the specification and/or drawing figure(s) to assist in prosecution of the application.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to K. Kiyabu whose telephone number is (571) 270-7836. The examiner can normally be reached Monday to Thursday 9:00 A.M. - 5:00 P.M. EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Temesghen Ghebretinsae, can be reached at (571) 272-3017. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicants are encouraged to use the USPTO Automated Interview Request (AIR) at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only.
Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/K. K./
Examiner, Art Unit 2626

/TEMESGHEN GHEBRETINSAE/
Supervisory Patent Examiner, Art Unit 2626

10/9/2025
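The claim 1 pipeline argued over above — designate two captured images as reference frames, interpolate their data sets into a third, generate a virtual image from it, and interlace captured with virtual images into a render sequence — can be illustrated in code. This is a minimal NumPy sketch of that general technique, not the applicant's actual implementation; the function name, linear interpolation, and row-wise interlacing are all illustrative assumptions.

```python
import numpy as np

def build_sequence(captured, alpha=0.5):
    """Toy sketch of the claimed pipeline: pick two captured frames as
    reference frames, interpolate a virtual frame between them, then
    interlace captured and virtual rows into a single frame."""
    first_ref = captured[0].astype(float)    # first reference frame / first data set
    second_ref = captured[-1].astype(float)  # second reference frame / second data set

    # "third data set" by interpolating the first data set with the second
    third = (1 - alpha) * first_ref + alpha * second_ref
    virtual = third.astype(captured[0].dtype)  # virtual image from the third data set

    # interlaced image: even rows from a captured frame, odd rows from the virtual one
    interlaced = captured[0].copy()
    interlaced[1::2] = virtual[1::2]

    # image sequence of captured, virtual, and at least one interlaced image
    return list(captured) + [virtual, interlaced]
```

In a display system the resulting sequence would be handed to the projector and stepped through in response to detected device inputs; here it is just returned as a list of frames.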

Prosecution Timeline

May 03, 2022
Application Filed
Jun 17, 2023
Non-Final Rejection — §101, §103
Oct 26, 2023
Response Filed
Oct 26, 2023
Response after Non-Final Action
Dec 27, 2023
Final Rejection — §101, §103
Jul 03, 2024
Request for Continued Examination
Jul 08, 2024
Response after Non-Final Action
Aug 15, 2024
Non-Final Rejection — §101, §103
Feb 24, 2025
Response Filed
Mar 22, 2025
Final Rejection — §101, §103
Sep 29, 2025
Request for Continued Examination
Sep 30, 2025
Response after Non-Final Action
Oct 08, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591324 — DISPLAY DEVICE
2y 5m to grant • Granted Mar 31, 2026
Patent 12586498 — DISPLAY APPARATUS
2y 5m to grant • Granted Mar 24, 2026
Patent 12585337 — AUGMENTED REALITY EXPERIENCES WITH OBJECT MANIPULATION
2y 5m to grant • Granted Mar 24, 2026
Patent 12578807 — METHODS AND SYSTEMS FOR CORRECTING USER INPUT
2y 5m to grant • Granted Mar 17, 2026
Patent 12578785 — INFORMATION PROCESSING APPARATUS, METHOD FOR PROCESSING INFORMATION, AND PROGRAM
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds
5-6
Grant Probability
57%
With Interview (+39.8%)
97%
Median Time to Grant
3y 1m
PTA Risk
High
Based on 373 resolved cases by this examiner. Grant probability derived from career allow rate.
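The headline projections follow from simple arithmetic on the examiner's career counts. A sketch, assuming the "With Interview" figure is modeled as the career allow rate plus the observed interview lift (consistent with 57% + 39.8% ≈ 97%, though the product may use a different model):

```python
granted, resolved = 213, 373   # examiner career counts from the dashboard
interview_lift = 0.398         # observed lift among resolved cases with interview

allow_rate = granted / resolved                         # career allow rate ≈ 0.571
grant_probability = round(allow_rate * 100)             # 57 (%)
with_interview = round((allow_rate + interview_lift) * 100)  # 97 (%)
```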

Free tier: 3 strategy analyses per month