Prosecution Insights
Last updated: April 19, 2026
Application No. 18/123,833

Transposing Virtual Objects Between Viewing Arrangements

Status: Non-Final OA (§103)
Filed: Mar 20, 2023
Examiner: CHEN, YU
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 3 (Non-Final)
Grant Probability: 68% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 68% (711 granted / 1052 resolved; +5.6% vs TC avg; above average)
Interview Lift: +29.9% (resolved cases with interview)
Typical Timeline: 2y 10m average prosecution; 110 applications currently pending
Career History: 1162 total applications across all art units
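The examiner figures above are internally consistent and can be reproduced from the raw counts. A minimal sketch of the reconciliation (variable names are ours, and the exact rounding order used by the tool is an assumption):

```python
# Raw career counts reported for this examiner
granted = 711      # applications granted
resolved = 1052    # applications resolved
pending = 110      # currently pending

# Career allow rate: 711 / 1052 ≈ 67.6%, displayed rounded as 68%
allow_rate = granted / resolved * 100
print(round(allow_rate))                 # 68

# Total career applications = resolved + currently pending
print(resolved + pending)                # 1162

# "With interview" probability: baseline plus the +29.9% interview lift
# (assumes the lift is added to the rounded baseline: 68 + 29.9 ≈ 98)
print(round(round(allow_rate) + 29.9))   # 98
```

The check confirms the 68% headline is the career allow rate itself, and the 98% with-interview figure is that baseline plus the stated +29.9% lift.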

Statute-Specific Performance

§101: 2.2% (-37.8% vs TC avg)
§103: 43.9% (+3.9% vs TC avg)
§102: 27.0% (-13.0% vs TC avg)
§112: 20.7% (-19.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 1052 resolved cases.
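One detail worth noting: all four deltas are measured against the same Tech Center baseline. Subtracting each stated delta from its rate recovers an identical 40.0% TC average estimate in every row, so the baseline is a single flat figure rather than a per-statute one. A quick consistency check (a sketch; the tuples simply restate the figures above):

```python
# (statute, examiner rate %, delta vs Tech Center average %) as reported
stats = [
    ("§101", 2.2, -37.8),
    ("§103", 43.9, +3.9),
    ("§102", 27.0, -13.0),
    ("§112", 20.7, -19.3),
]

# rate - delta recovers the implied TC average; it is 40.0% for every statute
for statute, rate, delta in stats:
    print(statute, round(rate - delta, 1))  # prints 40.0 each time
```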

Office Action (§103)
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 08/22/2025 has been entered.

Response to Amendment

This is in response to applicant’s amendment/response filed on 08/22/2025, which has been entered and made of record. Claims 1, 5-7, 15, 19 and 20 have been amended. No claim has been cancelled. No claim has been added. Claims 1-20 are pending in the application.

Response to Arguments

Applicant’s arguments filed on 08/22/2025 have been fully considered but are moot because the arguments do not apply to any of the references being used in the current rejection.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S.
1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US Pub 2019/0340821) in view of Williams et al. (US Pub 2023/0244354 A1).

As to claim 1, Chen discloses a method comprising: at a device including a display, one or more processors, and a non-transitory memory (Fig. 6): displaying multiple virtual objects in a first viewing arrangement having a two-dimensional appearance in a first region of an environment that is bounded, wherein the multiple virtual objects are arranged and displayed in a first spatial arrangement in the bounded first region as a cluster with spatial relationships (Fig. 1, Fig. 2D to 2G, ¶0025, “In the example of FIG. 1, the selectable virtual objects 132, 134 are shown initially projected on a virtual interface 120, which appears floating in space in front of the user 104.” ¶0049, “Responsive to receipt of a data type identifier, the virtual content surface re-mapper selects an expanded presentation format for the re-projection in which the items of the collection are spread out across the designated surface so as to permit a user to individually view, select, manipulate, and otherwise interact with the individual items of the collection of content. If, for example, the selected virtual object 208 is a condensed email thread, the virtual content surface re-mapper may receive a data type identifier “email thread” and a number of emails (e.g., five emails) in the thread.
The virtual content surface re-mapper determines that the identifier “email stack” is associated with a rectangular content box for each email and selects a presentation format with five rectangles to be spread out across the designated surface according to an arrangement based on user attributes and/or surface.” Fig. 2E, 208 a-e on surface 220.); obtaining a user input corresponding to a request to change to a second viewing arrangement in a second region of the environment (Fig. 2A to 2G, ¶0024, ¶0026, “the virtual content surface re-mapper 122 performs coordinate remapping of user-selected virtual objects 132, 134 and communicates with the AR/VR application 114 and/or a graphics engine (not shown) to move (re-project) the user-selected virtual objects 132, 134 in three-dimensional coordinate space to place the objects on a user-selected virtual or physical surface that is external to the AR/VR application 114. For example, the user 104 may wish to move selected virtual content items to a different physical or virtual surface where the selected objects can be more easily previewed, reached, or displayed in greater detail (e.g., shown larger, shown to include content initially hidden, re-arranged in a desired way).”) determining a mapping between the first spatial arrangement and a second spatial arrangement based on the spatial relationships (Fig. 2C to 2G, ¶0027, “The virtual content surface re-mapper 122 is shown to include a surface identifier 124 and a content arranger 128 and projection mapping tool 126. 
Responsive to the user 104 selection of one or more virtual objects 132, 134 via inputs provided to the UI content interaction tool 116, the surface identifier 124 identifies available surfaces onto which the selected virtual objects 132, 134 may be re-projected.” ¶0043, “At the conclusion of this movement, the selected virtual object 208 (e.g., a collection of content) is projected near to the controller 204, as shown.” ¶0044, “one or more of the identified potential projection surfaces (e.g., the surfaces 218, 220) are virtual surfaces external to the application that generates the virtual interface 212 and its associated virtual objects described with respect to FIGS. 2A-2C.” ¶0046, “the virtual content surface re-mapper determines an arrangement for the selected virtual objects. In various implementations, this determination is based on a variety of inputs including, without limitation, inputs from the application that created the virtual objects (e.g., the email application) and/or other applications. For example, the application that created the virtual objects may provide the virtual content surface re-mapper with information such as the total number of selected virtual content items, shape data or other information for determining a mapping of each of the selected virtual objects in three-dimensional space, and/or the type of data (e.g., whether the selected virtual objects are text files, photos, audio).” ¶0047, “the virtual content surface re-mapper content determines a presentation format for the re-projection of each of the selected virtual objects based on a data type identifier received from the application that owns (generates) the selected virtual objects (e.g., the virtual object 208). For example, the data type identifier may indicate a type of content represented by the virtual object and/or further indicate whether each of the virtual objects include text data, audio data, imagery, etc. 
Based on the data type identifier, the virtual content surface re-mapper determines a general shape and layout for each individual one of the selected virtual objects on the designated surface 220” ¶0049-0050, ¶0054, ¶0057-0058, ¶0061); and displaying the multiple virtual objects in the second viewing arrangement in the second region of the environment (Fig.2E and 2G, ¶0033), wherein the objects in the multiple virtual objects are arranged and displayed as the cluster in the second viewing arrangement as the cluster with one or more of the spatial relationships preserved (Fig. 2E to Fig. 2G, ¶0058, “selected positions for individual virtual objects 208 a, 208 b, 208 c, 208 d, and 208 e included within the collection represented by the virtual object 208.” ¶0060, “the user has provided input via the controller 204 to select each of the virtual objects 208 a-208 e from the surface 220 (e.g., where 208 a-208 e are subobjects in the collection represented by the virtual object 208 in FIG. 2A, 2B, 2C, and 2E). The system 200 has condensed the virtual objects 208 a-208 e into a stack shown near to the controller 204. Again, the system presents highlights around the identified projection surfaces (e.g., the surfaces 218 and 220). The user tilts the controller 204 to highlight the surface 218 with the virtual projection beam 216 and provides further input to transmit a surface selection instruction selecting the surface 218.” ¶0061, “the system 200 projects the selected virtual objects 208 a, 208 b, 208 c, 208 d, and 208 e according to the determined arrangement onto the surface 218 that the user has selected via the surface selection instruction.”). Chen does not explicitly disclose the second viewing arrangement having a three-dimensional appearance. However, such feature is obvious in augmented reality desktop application. Williams teaches the second viewing arrangement having a three-dimensional appearance (Williams, Fig. 9A to Fig. 
9E, ¶0128, “The screen 902 also displays 2D elements 904, 906, 908 that are graphics (e.g., object images) for viewing by the user. The screen 902 further displays a cursor 910, which is controllable by the user via a user input device.” ¶0129-0130, “the graphics may be an object 950 that is rendered based on the 3D model. In some embodiments, the object 950 may be a 3D version of the 2D object 906 selected by the user.” ¶0131, “after the 3D model is obtained, the user may position the object 950 (e.g., in one or more directions) and/or rotate the object 950 (e.g., about one or more axes) based on the 3D model. For example, as shown in FIGS. 9C-9D, the user may use the cursor 910 to select the object 950 and move the object 950 to a certain location with respect to the environment as viewed through the screen 902.” ¶0133, “FIG. 9E illustrates an example of a page that includes 2D elements 960, 962. In the illustrated example, the page is a web page, and the 2D elements 960, 962 are content presented in the web page. The 2D elements 960, 962 are associated with respective 3D models, such that a user can retrieve such models by selecting the corresponding 2D elements 960, 962. When one of the 2D elements 960, 962 is selected by the user, the processing unit 130 obtains the corresponding 3D model, and provides graphics based on the 3D model. In some embodiments, the graphics may be a 3D version of the selected 2D element for display by the screen 902. For example, the graphics may be an object rendered based on the 3D model. The processing unit 130 may also receive input from the user indicating a desired position to place the object (the 3D version of the selected 2D element). The desired position may indicate a location with respect to a coordinate system of the screen 902, and/or whether to place the object in front of, or behind, another object (e.g., a virtual object or a real object) as presented through the screen 902.”). 
Chen and Williams are considered to be analogous art because both pertain to head-worn image display devices. It would have been obvious before the effective filing date of the claimed invention to have modified Chen with the features of “the second viewing arrangement having a three-dimensional appearance” as taught by Williams. The suggestion/motivation would have been in order to select a 2D element (e.g., an image, a graphic, etc.) displayed in a page to access a 3D model, and the accessed 3D model may then be placed by a user on or near a real-world or virtual object as perceived by the user via the spatial computing environment (Williams, ¶0012).

As to claim 2, claim 1 is incorporated and the combination of Chen and Williams discloses the first viewing arrangement comprises a bounded viewing arrangement (Chen, Fig. 1, Fig. 2A).

As to claim 3, claim 1 is incorporated and the combination of Chen and Williams discloses the first region of the environment comprises a first two-dimensional virtual surface enclosed by a boundary (Chen, Fig. 1, Fig. 2A).

As to claim 4, claim 3 is incorporated and the combination of Chen and Williams discloses the first region of the environment further comprises a second two-dimensional virtual surface substantially parallel to the first two-dimensional virtual surface (Chen, Fig. 2A, item 210, 212 and 222 are substantially parallel. Also see Fig.2B, item 210, 208, 212.).

As to claim 5, claim 4 is incorporated and the combination of Chen and Williams discloses displaying the multiple virtual objects on at least one of the first two-dimensional virtual surface or the second two-dimensional virtual surface (Chen, Fig. 2A, item 210, 212 and 222 are substantially parallel. Also see Fig.2B, item 210, 208, 212.).
As to claim 6, claim 1 is incorporated and the combination of Chen and Williams discloses the multiple virtual objects correspond to content items having a first characteristic (Chen, ¶0037, “A column 214 of rectangular virtual objects 206, 208 includes condensed information (e.g., subject line, sender, timestamp information) for each of several emails in a mailbox currently-selected from the navigation pane.”).

As to claim 7, claim 1 is incorporated and the combination of Chen and Williams discloses the multiple virtual objects comprises: a first subset of virtual objects corresponding to content items having a first characteristic; and a second subset of virtual objects corresponding to content items having a second characteristic different from the first characteristic (Chen, ¶0047, “the virtual content surface re-mapper content determines a presentation format for the re-projection of each of the selected virtual objects based on a data type identifier received from the application that owns (generates) the selected virtual objects (e.g., the virtual object 208). For example, the data type identifier may indicate a type of content represented by the virtual object and/or further indicate whether each of the virtual objects include text data, audio data, imagery, etc. Based on the data type identifier, the virtual content surface re-mapper determines a general shape and layout for each individual one of the selected virtual objects on the designated surface 220. For example, a text file data type identifier may be pre-associated with a first defined object shape for the re-projection, while an image file may be pre-associated with a second defined object shape for the re-projection.”).
As to claim 8, claim 7 is incorporated and the combination of Chen and Williams discloses displaying the first subset of virtual objects in a first area of the first region; and displaying the second subset of virtual objects in a second area of the first region (Chen, ¶0046, “the application that created the virtual objects may provide the virtual content surface re-mapper with information such as the total number of selected virtual content items, shape data or other information for determining a mapping of each of the selected virtual objects in three-dimensional space, and/or the type of data (e.g., whether the selected virtual objects are text files, photos, audio).” ¶0047, “the virtual content surface re-mapper content determines a presentation format for the re-projection of each of the selected virtual objects based on a data type identifier received from the application that owns (generates) the selected virtual objects (e.g., the virtual object 208). For example, the data type identifier may indicate a type of content represented by the virtual object and/or further indicate whether each of the virtual objects include text data, audio data, imagery, etc. Based on the data type identifier, the virtual content surface re-mapper determines a general shape and layout for each individual one of the selected virtual objects on the designated surface 220. For example, a text file data type identifier may be pre-associated with a first defined object shape for the re-projection, while an image file may be pre-associated with a second defined object shape for the re-projection.” ¶0049-0050). 
As to claim 9, claim 7 is incorporated and the combination of Chen and Williams discloses the first characteristic is a first media type; and the second characteristic is a second media type different from the first media type (Chen, ¶0047, “the virtual content surface re-mapper content determines a presentation format for the re-projection of each of the selected virtual objects based on a data type identifier received from the application that owns (generates) the selected virtual objects (e.g., the virtual object 208). For example, the data type identifier may indicate a type of content represented by the virtual object and/or further indicate whether each of the virtual objects include text data, audio data, imagery, etc. Based on the data type identifier, the virtual content surface re-mapper determines a general shape and layout for each individual one of the selected virtual objects on the designated surface 220. For example, a text file data type identifier may be pre-associated with a first defined object shape for the re-projection, while an image file may be pre-associated with a second defined object shape for the re-projection.” ¶0049-0050).

As to claim 10, claim 7 is incorporated and the combination of Chen and Williams discloses the first characteristic is an association with a first application; and the second characteristic is an association with a second application different from the first application (Chen, ¶0046, “the virtual content surface re-mapper determines an arrangement for the selected virtual objects.
In various implementations, this determination is based on a variety of inputs including, without limitation, inputs from the application that created the virtual objects (e.g., the email application) and/or other applications.” ¶0048, ¶0049, “the virtual surface content re-mapper receives a data type identifier indicating that a selected virtual object includes a collection of content (e.g., that the selected virtual object 208 is a directory icon representing multiple files, a photo album including multiple photos, a playlist of audio or video data, an email thread, news stack, etc.) and also indicating the type of content in the collection. Responsive to receipt of a data type identifier, the virtual content surface re-mapper selects an expanded presentation format for the re-projection in which the items of the collection are spread out across the designated surface so as to permit a user to individually view, select, manipulate, and otherwise interact with the individual items of the collection of content.”).

As to claim 11, claim 1 is incorporated and the combination of Chen and Williams discloses the user input comprises a gesture input (Chen, ¶0024, “the UI content interaction tool 116 receives imagery from a camera and uses gesture-recognition software to decipher hand gestures signifying user interactions with various virtual objects, such as gestures signifying touching, tapping, pinching, and dragging to select one or more of the virtual objects 132, 134.”).
As to claim 12, claim 1 is incorporated and the combination of Chen and Williams discloses the user input comprises an audio input (Chen, ¶0024, “the UI content interaction tool 116 may detect user inputs from one or more cameras, depth sensors, microphones, or heat sensors mounted on or otherwise in communication with the processing device 102.” ¶0077, “Applications 612 may receive input from various input devices such as a microphone 634” ¶0078, “an audio interface (e.g., the microphone 634, an audio amplifier and speaker and/or audio jack)” user input from microphones suggests an audio input.).

As to claim 13, claim 1 is incorporated and the combination of Chen and Williams discloses receiving the user input from a user input device (Chen, ¶0002, ¶0024).

As to claim 14, claim 1 is incorporated and the combination of Chen and Williams discloses obtaining a confirmation input before determining the mapping between the first spatial arrangement and the second spatial arrangement (Chen, ¶0003, “present a surface selection prompt to a user. Responsive to receipt of a surface selection instruction received in response to the surface selection prompt, the virtual content surface re-mapper projects the one or more selected virtual objects onto a plane corresponding to a surface designated by the surface selection instruction.” ¶0030, “In some implementations, the surface identifier 124 presents a prompt that enables the user 104 to view the identified potential projection surface(s) recognized by the surface identifier 124 and/or selects a designated surface from the collection of identified potential projection surface(s).” ¶0073, ¶0082).
As to claim 15, claim 1 is incorporated and the combination of Chen and Williams discloses moving an object in the multiple virtual objects in the second viewing arrangement; and returning from the second viewing arrangement to the first viewing arrangement, including displaying the object in the cluster at a different location in the first region (Chen, ¶0020, “selectively move virtual objects between different surfaces (such as virtual surfaces or real-world surfaces). With this functionality, a user can self-create multiple different virtual workspaces in a room and place content on each different workspace to view in isolation of other projected virtual content. This control over the selection of surfaces to receive projected content facilitates a more natural emotional connection to virtual content by allowing virtual objects to behave in a realistic way with other virtual objects and/or real-world surroundings. For example, a user may select a collection of documents (e.g., photos) from a virtual interface and spread those documents out across a real-world table or virtual surface external to the application to view them as if they were indeed actual physical documents.” Fig. 2E to Fig. 2G, object 208s can be moved between surface 220 and 218. See ¶0058-0061.).

As to claim 16, claim 1 is incorporated and the combination of Chen and Williams discloses the second region of the environment is associated with a physical element in the environment (Chen, ¶0020, “a user may select a collection of documents (e.g., photos) from a virtual interface and spread those documents out across a real-world table or virtual surface external to the application to view them as if they were indeed actual physical documents.”).
As to claim 17, claim 16 is incorporated and the combination of Chen and Williams discloses the second region of the environment is associated with a surface of the physical element in the environment (Chen, ¶0044, “one or more of the potential projection surfaces (e.g., the surfaces 218, 220) are real-world surfaces visible through projection optics of the system 200. For example, the surface 218 may be the user's coffee table or kitchen table and the surface 220 may be the user's wall, refrigerator, etc.”).

As to claim 18, claim 16 is incorporated and the combination of Chen and Williams discloses determining a display size of a virtual object as a function of a size of the physical element (Chen, ¶0051, “determine information, such as physical attributes of the user and/or physical attributes of the selected surface (if the selected surface is a physical surface) including without limitation size, user height, user and surface location (e.g., separation of the surface and the user relative to one another), surface orientation, etc.” ¶0054, “the virtual content surface re-mapper selects positions for the virtual objects based on the orientation (e.g., vertical or horizonal) of the designated surface 220 and/or a size or aspect ratio of the designated surface 220.” ¶0056, “the selected virtual objects are photographs, the virtual surface content re-mapper may select a realistic size for presenting each photograph (e.g., 3×4 inches or 5×7 inches). Alternatively, the virtual surface content re-mapper may select a size for presenting each photograph that appears realistic relative to the size of the designated surface and/or the user. For example, the re-projected virtual content items may appear to have a size relative to the selected surface that is similar to a ratio of the corresponding real-world object and surface size.” ¶0065).
As to claim 19, the combination of Chen and Williams discloses a device comprising: one or more processors; a non-transitory memory; and one or more programs stored in the non-transitory memory, which, when executed by the one or more processors, cause the device to: display multiple virtual objects in a first viewing arrangement having two-dimensional appearance in a first region of an environment that is bounded, wherein the multiple virtual objects are arranged and displayed in a first spatial arrangement in the bounded first region as a cluster with spatial relationships; obtain a user input corresponding to a request to change to a second viewing arrangement in a second region of the environment; determine a mapping between the first spatial arrangement and a second spatial arrangement based on the spatial relationships; and display the multiple virtual objects in the second viewing arrangement in the second region of the environment, wherein the multiple virtual objects are arranged and displayed as the cluster in the second viewing arrangement having a three-dimensional appearance as the cluster with one or more of the spatial relationships preserved (See claim 1 for detailed analysis.). 
As to claim 20, the combination of Chen and Williams discloses a non-transitory memory storing one or more programs, which, when executed by one or more processors of a device, cause the device to: display multiple virtual objects in a first viewing arrangement having a two-dimensional appearance in a first region of an environment that is bounded, wherein the multiple virtual objects are arranged and displayed in a first spatial arrangement in the bounded first region as a cluster with spatial relationships; obtain a user input corresponding to a request to change to a second viewing arrangement in a second region of the environment; determine a mapping between the first spatial arrangement and a second spatial arrangement based on the spatial relationships; and display the multiple virtual objects in the second viewing arrangement in the second region of the environment, wherein the multiple virtual objects are arranged and displayed as the cluster in the second viewing arrangement having a three-dimensional appearance as the cluster with one or more of the spatial relationships preserved (See claim 1 for detailed analysis.).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Champion et al. (US Pub 2019/0230346 A1) discloses transforming and/or upconverting objects and/or visual media from the 2D workspace and/or 2D webpages to the 3D workspace as 3D objects and/or stereoscopic output for display in the 3D workspace (Fig. 9E).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YU CHEN whose telephone number is (571)270-7951. The examiner can normally be reached on M-F 8-5 PST Mid-day flex. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu, can be reached on 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/YU CHEN/
Primary Examiner, Art Unit 2613

Prosecution Timeline

Mar 20, 2023: Application Filed
Feb 13, 2024: Response after Non-Final Action
Dec 02, 2024: Non-Final Rejection (§103)
Mar 11, 2025: Applicant Interview (Telephonic)
Mar 11, 2025: Examiner Interview Summary
Mar 29, 2025: Response Filed
Apr 02, 2025: Final Rejection (§103)
Aug 21, 2025: Examiner Interview Summary
Aug 21, 2025: Applicant Interview (Telephonic)
Aug 22, 2025: Request for Continued Examination
Aug 25, 2025: Response after Non-Final Action
Dec 01, 2025: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604497: THIN FILM TRANSISTOR AND ARRAY SUBSTRATE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597176: IMAGE GENERATOR AND METHOD OF IMAGE GENERATION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12589481: TOOL ATTRIBUTE MANAGEMENT IN AUTOMATED TOOL CONTROL SYSTEMS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12588347: DISPLAY DEVICE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586265: LINE DRAWING METHOD, LINE DRAWING APPARATUS, ELECTRONIC DEVICE, AND COMPUTER READABLE STORAGE MEDIUM (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 98% (+29.9%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 1052 resolved cases by this examiner. Grant probability derived from career allow rate.
