Prosecution Insights
Last updated: April 19, 2026
Application No. 18/974,656

SYSTEM AND METHOD FOR SECURELY MANAGING REAL ESTATE RENTALS USING A MULTI-WINDOW DISPLAY SYSTEM

Non-Final OA (§103)
Filed: Dec 09, 2024
Examiner: OKEBATO, SAHLU
Art Unit: 2625
Tech Center: 2600 — Communications
Assignee: Altura Innovations, LLC
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 76% (509 granted / 668 resolved), +14.2% vs TC avg, above average
Interview Lift: +18.0% on resolved cases with interview (strong)
Typical Timeline: 2y 10m average prosecution; 38 currently pending
Career History: 706 total applications across all art units
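As a sanity check, the headline examiner statistics above are mutually consistent. A quick sketch, with the figures copied directly from the cards:

```python
# Figures taken from the examiner cards above.
granted = 509    # applications granted
resolved = 668   # applications resolved (granted + abandoned/rejected)
pending = 38     # currently pending

allow_rate = granted / resolved   # career allow rate
total_apps = resolved + pending   # career total across all art units

print(f"{allow_rate:.0%}")  # 76%
print(total_apps)           # 706
```

The 706 total applications reported in the career-history card is exactly the 668 resolved cases plus the 38 currently pending.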

Statute-Specific Performance

§101: 1.1% (-38.9% vs TC avg)
§103: 63.7% (+23.7% vs TC avg)
§102: 19.2% (-20.8% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 668 resolved cases
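The black-line baseline can be recovered from the cards themselves: each examiner rate minus its stated delta lands on the same Tech Center figure. A quick sketch using the values above:

```python
# Statute-specific rates and "vs TC avg" deltas, copied from the cards above.
examiner_rates = {"101": 1.1, "103": 63.7, "102": 19.2, "112": 12.6}
deltas = {"101": -38.9, "103": 23.7, "102": -20.8, "112": -27.4}

# Implied Tech Center average per statute: examiner rate minus its delta.
tc_avg = {s: round(examiner_rates[s] - deltas[s], 1) for s in examiner_rates}
print(tc_avg)  # every statute resolves to the same 40.0% baseline
```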

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Jones et al., US PGPUB 20140129948 (hereinafter referenced as Jones), in view of Thomas, US Patent 8433650.

As to claim 1, Jones discloses a computer-implemented method of customizing a multi-view display with interactive and dynamic features, comprising: receiving, by a processing unit of a display, a first request to cast at least a portion of a first user interface of a first computing device (e.g., sharing content of 302 onto the common display 326, fig. 2); receiving, by the processing unit of the display, a second request to cast at least a portion of a second user interface of a second computing device (e.g., sharing content of 304 onto the common display 326, fig. 2); securing, over a communication network, isolated encryption protocols for data transfer between the processing unit of the display and each of the first computing device and the second computing device ([0019] in certain examples, a secure and private link can be created between one or more of these devices); receiving, by the processing unit, real-time first image data reflecting the at least a portion of the first user interface from the first computing device and real-time second image data reflecting the at least a portion of the second user interface from the second computing device ([0039] FIG. 2 depicts an exemplary embodiment of a method for the displaying of information on a common display unit 326 from multiple portable devices); processing the real-time first image data and the real-time second image data to extract a plurality of dynamic features that adjust in real-time to the real-time first image data and the real-time second image data; generating, by the processing unit, a custom dynamic view based on the plurality of dynamic features in at least two different windows associated with the real-time first image data and the real-time second image data, respectively ([0038] The common display area can also be configured such that users can include notes under each property. This feature may be helpful for the users to provide real time notes and feedback onto the screen and make notes or charts of the pros and cons for each property); creating mirrored image data based on the custom dynamic view for a display user interface of the display; transmitting the mirrored image data to the display ([0026] For example, a user can take a picture of a potential item or property under consideration, and the user can place the picture onto a common display area, which can be viewed by multiple users on a common screen or can be viewed on multiple screens by various users in different locations simultaneously); generating operations based on the action, wherein the operations include one or more operations at the first computing device; sending the generated operations to the first computing device; receiving updated real-time first image data from the first computing device; generating, by the processing unit, updated dynamic features in an updated custom dynamic view based on the updated real-time first image data ([0040] Each device 302, 304, 306 can also display relevant information regarding the property onto the screen, such as price, location, number of bedrooms and bathrooms, as well as the amount of square footage of each property. The information can be displayed in rows and columns and can be sorted by neighborhood information 311. This can be customized by the users via a central control, which can be accessed on one or more of the users' devices application settings); creating updated mirrored image data based on the updated custom dynamic view; and transmitting the updated mirrored image data to the display ([0026] For example, a user can take a picture of a potential item or property under consideration, and the user can place the picture onto a common display area, which can be viewed by multiple users on a common screen or can be viewed on multiple screens by various users in different locations simultaneously).
Jones does not explicitly disclose receiving an interaction with a dynamic feature associated with the real-time second image data to perform an action on one or more dynamic features associated with the real-time first image data.

However, in the same endeavor, Thomas discloses receiving an interaction with a dynamic feature associated with the real-time second image data to perform an action on one or more dynamic features associated with the real-time first image data (perform various transaction tasks associated with their role in the process such as ordering services (See FIG. 19), accessing electronic documents saved in the 'File Drawer' (See FIG. 8), etc. Tab Table screens are dynamic forms which the application server 45 allows to be displayed on each client device. Some tab tables are 'dynamic,' meaning as information is entered, or actions are performed, the layout and information presented changes automatically on the screen to allow the user to proceed with the process automatically and more easily). Therefore, it would have been obvious to one of ordinary skill in the art to modify the disclosure of Jones to further include Thomas's dynamic information sharing method in order to improve the information sharing process.

As to claim 15, Jones discloses a display for customizing a multi-view display with interactive and dynamic features, comprising: a storage configured to store instructions (memory 114, fig. 1); and one or more processors (processor 116, fig. 1) configured to execute the instructions and cause the one or more processors to: receive a first request to cast at least a portion of a first user interface of a first computing device (e.g., sharing content of 302 onto the common display 326, fig. 2); receive a second request to cast at least a portion of a second user interface of a second computing device (e.g., sharing content of 304 onto the common display 326, fig. 2); secure, over a communication network, isolated encryption protocols for data transfer between the one or more processors of the display and each of the first computing device and the second computing device ([0019] in certain examples, a secure and private link can be created between one or more of these devices); receive real-time first image data reflecting the at least a portion of the first user interface from the first computing device and real-time second image data reflecting the at least a portion of the second user interface from the second computing device; process the real-time first image data and the real-time second image data to extract a plurality of dynamic features that adjust in real-time to the real-time first image data and the real-time second image data ([0039] FIG. 2 depicts an exemplary embodiment of a method for the displaying of information on a common display unit 326 from multiple portable devices); generate a custom dynamic view based on the plurality of dynamic features in at least two different windows associated with the real-time first image data and the real-time second image data, respectively; create mirrored image data based on the custom dynamic view for a display user interface of the display; transmit the mirrored image data to the display ([0038] The common display area can also be configured such that users can include notes under each property. This feature may be helpful for the users to provide real time notes and feedback onto the screen and make notes or charts of the pros and cons for each property); generate operations based on the action, wherein the operations include one or more operations at the first computing device; send the generated operations to the first computing device; receive updated real-time first image data from the first computing device; generate updated dynamic features in an updated custom dynamic view based on the updated real-time first image data ([0040] Each device 302, 304, 306 can also display relevant information regarding the property onto the screen, such as price, location, number of bedrooms and bathrooms, as well as the amount of square footage of each property. The information can be displayed in rows and columns and can be sorted by neighborhood information 311. This can be customized by the users via a central control, which can be accessed on one or more of the users' devices application settings); create updated mirrored image data based on the updated custom dynamic view; and transmit the updated mirrored image data to the display ([0026] For example, a user can take a picture of a potential item or property under consideration, and the user can place the picture onto a common display area, which can be viewed by multiple users on a common screen or can be viewed on multiple screens by various users in different locations simultaneously).

Jones does not explicitly disclose receiving an interaction with a dynamic feature associated with the real-time second image data to perform an action on one or more dynamic features associated with the real-time first image data.

However, in the same endeavor, Thomas discloses receiving an interaction with a dynamic feature associated with the real-time second image data to perform an action on one or more dynamic features associated with the real-time first image data (perform various transaction tasks associated with their role in the process such as ordering services (See FIG. 19), accessing electronic documents saved in the 'File Drawer' (See FIG. 8), etc. Tab Table screens are dynamic forms which the application server 45 allows to be displayed on each client device. Some tab tables are 'dynamic,' meaning as information is entered, or actions are performed, the layout and information presented changes automatically on the screen to allow the user to proceed with the process automatically and more easily). Therefore, it would have been obvious to one of ordinary skill in the art to modify the disclosure of Jones to further include Thomas's dynamic information sharing method in order to improve the information sharing process.

As to claim 20, Jones discloses a non-transitory computer readable medium comprising instructions (a processor 116 for executing computer-readable instructions), the instructions, when executed by a computing system, causing the computing system to: receive a first request to cast at least a portion of a first user interface of a first computing device (e.g., sharing content of 302 onto the common display 326, fig. 2); receive a second request to cast at least a portion of a second user interface of a second computing device (e.g., sharing content of 304 onto the common display 326, fig. 2); secure, over a communication network, isolated encryption protocols for data transfer between a processing unit of a display and each of the first computing device and the second computing device ([0019] in certain examples, a secure and private link can be created between one or more of these devices); receive real-time first image data reflecting the at least a portion of the first user interface from the first computing device and real-time second image data reflecting the at least a portion of the second user interface from the second computing device; process the real-time first image data and the real-time second image data to extract a plurality of dynamic features that adjust in real-time to the real-time first image data and the real-time second image data ([0038] The common display area can also be configured such that users can include notes under each property. This feature may be helpful for the users to provide real time notes and feedback onto the screen and make notes or charts of the pros and cons for each property); generate a custom dynamic view based on the plurality of dynamic features in at least two different windows associated with the real-time first image data and the real-time second image data, respectively; create mirrored image data based on the custom dynamic view for a display user interface of the display; transmit the mirrored image data to the display ([0040] Each device 302, 304, 306 can also display relevant information regarding the property onto the screen, such as price, location, number of bedrooms and bathrooms, as well as the amount of square footage of each property. The information can be displayed in rows and columns and can be sorted by neighborhood information 311. This can be customized by the users via a central control, which can be accessed on one or more of the users' devices application settings); generate operations based on the action, wherein the operations include one or more operations at the first computing device; send the generated operations to the first computing device; receive updated real-time first image data from the first computing device; generate updated dynamic features in an updated custom dynamic view based on the updated real-time first image data ([0040] Each device 302, 304, 306 can also display relevant information regarding the property onto the screen, such as price, location, number of bedrooms and bathrooms, as well as the amount of square footage of each property. The information can be displayed in rows and columns and can be sorted by neighborhood information 311. This can be customized by the users via a central control, which can be accessed on one or more of the users' devices application settings); create updated mirrored image data based on the updated custom dynamic view; and transmit the updated mirrored image data to the display ([0026] For example, a user can take a picture of a potential item or property under consideration, and the user can place the picture onto a common display area, which can be viewed by multiple users on a common screen or can be viewed on multiple screens by various users in different locations simultaneously).

Jones does not explicitly disclose receiving an interaction with a dynamic feature associated with the real-time second image data to perform an action on one or more dynamic features associated with the real-time first image data.
However, in the same endeavor, Thomas discloses receiving an interaction with a dynamic feature associated with the real-time second image data to perform an action on one or more dynamic features associated with the real-time first image data (perform various transaction tasks associated with their role in the process such as ordering services (See FIG. 19), accessing electronic documents saved in the 'File Drawer' (See FIG. 8), etc. Tab Table screens are dynamic forms which the application server 45 allows to be displayed on each client device. Some tab tables are 'dynamic,' meaning as information is entered, or actions are performed, the layout and information presented changes automatically on the screen to allow the user to proceed with the process automatically and more easily). Therefore, it would have been obvious to one of ordinary skill in the art to modify the disclosure of Jones to further include Thomas's dynamic information sharing method in order to improve the information sharing process.

As to claim 2, the combination of Jones and Thomas discloses the computer-implemented method of claim 1. The combination further discloses authenticating a connection between the processing unit and the first computing device using a secure key exchange protocol; and authenticating a connection between the processing unit and the second computing device using another secure key exchange protocol, wherein the first computing device is isolated from the second computing device (Jones, [0017] For example, the computer data may include security information, bank account information, user login/profile information, service provider list and related information, and/or other information. This data may also be used to support one or more of the numerous features disclosed throughout this disclosure).

As to claim 3, the combination of Jones and Thomas discloses the computer-implemented method of claim 1. The combination further discloses generating a scannable symbol to be displayed on the display, wherein the first request is based on a scan by the first computing device (Jones, [0042] Once the particular applicant is approved, the prequalification letter could be sent directly to the user's device on an application such that the user could display the prequalification letter to an agent, or a bar code could be provided to the user's device for reading by an agent's scanner or reader).

As to claim 4, the combination of Jones and Thomas discloses the computer-implemented method of claim 1. The combination further discloses the updated real-time first image data is generated based on real-time sensor data from Internet of Things (IoT) devices in a real-world setting (Thomas, (351) The system design, the workflow (See FIGS. 1, 1a), virtual office (See FIG. 4) and virtual desktop (See FIGS. 5, 6) can be adapted for use in other markets or industries. Some other markets or industries can include, for example, international trade and shipping).

As to claim 5, the combination of Jones and Thomas discloses the computer-implemented method of claim 4. The combination further discloses the updated real-time first image data includes a virtual tour view that is generated based on the real-time sensor data (Thomas, as shown in fig. 1b, for example: system and process 45, 45W can send previously saved information 50, 75, 245, 265, 270, 275, 1350, 1360, 1660, for example text, audio, graphic, video, or driving directions 1380, to a wireless computing device or cell phone; send for example text, audio, graphic or video description or 'tour' of the property describing for example the neighborhood and home features to a wireless computing device).

As to claim 6, the combination of Jones and Thomas discloses the computer-implemented method of claim 5. The combination further discloses the virtual tour view is a generated virtual reality (VR) view or an augmented reality (AR) view that includes features based on the real-time sensor data (Thomas, FIG. 3a is a view depicting Detailed Virtual Real Estate Office Information and Document Workflow Process).

As to claim 7, the combination of Jones and Thomas discloses the computer-implemented method of claim 4. The combination further discloses one of the operations at the first computing device is to filter historical footage based on one or more parameters associated with the action, wherein the updated real-time first image data includes features associated with the filtered historical footage (Jones, [0038] The common display area can also be configured such that users can include notes under each property. This feature may be helpful for the users to provide real time notes and feedback onto the screen and make notes or charts of the pros and cons for each property).

As to claim 8, the combination of Jones and Thomas discloses the computer-implemented method of claim 7. The combination further discloses analyzing the updated real-time first image data based on one of the actions at the second computing device; summarizing the analyzed real-time footage; and generating a new dynamic feature based on the summarization, wherein the updated custom dynamic view includes the new dynamic feature (Thomas, When buyer 15 is, for example, driving in the neighborhood, enters information on a property from a for sale sign, or visits a home for viewing 245, or at any time, the process and system 45, 45W can perform automatically, and where possible simultaneously, one or more tasks or processes, for example: system and process 45, 45W can send previously saved information 50, 75, 245, 265, 270, 275, 1350, 1360, 1660, for example text, audio, graphic, video, or driving directions 1380, to a wireless computing device or cell phone; col. 57, lines 9-17).
As to claim 9, the combination of Jones and Thomas discloses the computer-implemented method of claim 4. The combination further discloses receiving an input from a user associated with the second computing device, either at the display or from the second computing device; generating second actions based on the received input, wherein the actions include one or more second actions at the first computing device; sending the one or more second actions to the first computing device; receiving second updated real-time first image data that shows an updated virtual tour view based on the input; generating, by the processing unit, a second updated custom dynamic view; creating second updated mirrored image data based on the second updated custom dynamic view; and transmitting the second updated mirrored image data to the display (Jones, [0040] Each device 302, 304, 306 can also display relevant information regarding the property onto the screen, such as price, location, number of bedrooms and bathrooms, as well as the amount of square footage of each property. The information can be displayed in rows and columns and can be sorted by neighborhood information 311. This can be customized by the users via a central control, which can be accessed on one or more of the users' devices application settings).

As to claim 10, the combination of Jones and Thomas discloses the computer-implemented method of claim 9. The combination further discloses inputting, at a machine-learning model applied by the processing unit, the input and the real-time first image data; and outputting, by the machine-learning model, the second actions (Thomas, Each virtual office will be one integrated computer program for each user from which they can receive, view, enter, change, and save data, perform their particular functions in the transaction, create, save and send all documents, order services from third-party service providers, track the transaction, and communicate with other participants in the particular transaction; col. 22, lines 18-24).

As to claim 11, the combination of Jones and Thomas discloses the computer-implemented method of claim 9. The combination further discloses the updated virtual tour view includes changes to an overlay that provides secondary information about a respective real estate property (Jones, [0033] The common display area can also be configured to overlay the content from the users on a map so that the users can see where exactly each property is located with respect to other properties being pushed to the screen by other users).

As to claim 12, the combination of Jones and Thomas discloses the computer-implemented method of claim 1. The combination further discloses the updated custom dynamic view includes an augmented reality view, further comprising: receiving a verification of a physical location of the second computing device; determining the physical location is in a restricted zone; and authenticating the second computing device for access to sensitive information (Thomas, if necessary, the lender can receive back the buyer's acceptance of the loan prequalification or preapproval letter (See FIG. 17) in the Virtual Mortgage Office 270 (See FIGS. 3b, 26, 26a), and acknowledgement of receipt of any disclosure documents as necessary).

As to claim 13, the combination of Jones and Thomas discloses the computer-implemented method of claim 1. The combination further discloses the second actions include adding a new window in the second updated custom dynamic view, further comprising: generating a new window based on the received updated real-time first image data and the updated real-time second image data (Jones, [0040] in that each of the users can provide input onto a common screen for vetting by all of the users. Additional options can be provided on the common display unit 326, such as financing, applying for a showing, school information, monthly mortgage payment amounts, and the like).

As to claim 14, the combination of Jones and Thomas discloses the computer-implemented method of claim 13. The combination further discloses inputting, at a machine-learning model applied by the processing unit, the updated real-time first image data; outputting, by the machine-learning model, features for an additional view; and triggering an action to add an additional view in the new window of the updated custom dynamic view, wherein the additional view in the updated custom dynamic view includes the features (Thomas, if necessary, the lender can receive back the buyer's acceptance of the loan prequalification or preapproval letter (See FIG. 17) in the Virtual Mortgage Office 270 (See FIGS. 3b, 26, 26a), and acknowledgement of receipt of any disclosure documents as necessary).

As to claim 16, the combination of Jones and Thomas discloses the display of claim 15.
The combination further discloses one of the operations at the first computing device is to filter historical footage based on one or more parameters associated with the action, wherein the updated real-time first image data includes features associated with the filtered historical footage, and wherein the one or more processors are configured to execute the instructions and cause the one or more processors to: analyze the updated real-time first image data based on one of the actions at the second computing device; summarize the analyzed real-time footage; and generate a new dynamic feature based on the summarization, wherein the updated custom dynamic view includes the new dynamic feature (Thomas, When buyer 15 is, for example, driving in the neighborhood, enters information on a property from a for sale sign, or visits a home for viewing 245, or at any time, the process and system 45, 45W can perform automatically, and where possible simultaneously, one or more tasks or processes, for example: system and process 45, 45W can send previously saved information 50, 75, 245, 265, 270, 275, 1350, 1360, 1660, for example text, audio, graphic, video, or driving directions 1380, to a wireless computing device or cell phone; col. 57, lines 9-17).

As to claim 17, the combination of Jones and Thomas discloses the display of claim 15. The combination further discloses the one or more processors are configured to execute the instructions and cause the one or more processors to: receive an input from a user associated with the second computing device, either at the display or from the second computing device; generate second actions based on the received input, wherein the actions include one or more second actions at the first computing device; send the one or more second actions to the first computing device; receive second updated real-time first image data that shows an updated virtual tour view based on the input; generate a second updated custom dynamic view; create second updated mirrored image data based on the second updated custom dynamic view; and transmit the second updated mirrored image data to the display (Jones, [0040] Each device 302, 304, 306 can also display relevant information regarding the property onto the screen, such as price, location, number of bedrooms and bathrooms, as well as the amount of square footage of each property. The information can be displayed in rows and columns and can be sorted by neighborhood information 311. This can be customized by the users via a central control, which can be accessed on one or more of the users' devices application settings).

As to claim 18, the combination of Jones and Thomas discloses the display of claim 17. The combination further discloses the one or more processors are configured to execute the instructions and cause the one or more processors to: input, at a machine-learning model applied by a processing unit, the input and the real-time first image data; and output, by the machine-learning model, the second actions (Thomas, Each virtual office will be one integrated computer program for each user from which they can receive, view, enter, change, and save data, perform their particular functions in the transaction, create, save and send all documents, order services from third-party service providers, track the transaction, and communicate with other participants in the particular transaction; col. 22, lines 18-24).

As to claim 19, the combination of Jones and Thomas discloses the display of claim 17. The combination further discloses the updated virtual tour view includes changes to an overlay that provides secondary information about a real estate property (Jones, [0040] in that each of the users can provide input onto a common screen for vetting by all of the users. Additional options can be provided on the common display unit 326, such as financing, applying for a showing, school information, monthly mortgage payment amounts, and the like).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Azuma et al., US PGPUB 20240015263, discloses methods, apparatus, systems, and articles of manufacture to provide remote telepresence communication.
At least one non-transitory machine-readable medium comprises instructions that, when executed, cause a processor to identify features from a plurality of images, the plurality of images representing a first user and a second user; create a first representation of the first user and a second representation of the second user, the representations created using the plurality of images and representing the respective users at specified distances and specified perspectives from a viewer; and construct a first image, the first image including the second model at a specified location within a shared environment, the first image to be presented on a first display.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAHLU OKEBATO, whose telephone number is (571) 270-3375. The examiner can normally be reached Mon-Fri, 8:00-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, WILLIAM BODDIE, can be reached at 571-272-0666. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/SAHLU OKEBATO/
Primary Examiner, Art Unit 2625
11/15/2025
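The independent claims mapped in the rejection above recite a casting protocol: two cast requests, an isolated encrypted link per device, dynamic-feature extraction from both real-time image streams, and a two-window composite view mirrored to the display. The flow can be sketched as minimal runnable Python; every class, function, and string below is hypothetical and illustrative only, taken neither from Jones, Thomas, nor the application's actual disclosure:

```python
# Illustrative-only sketch of the independent-claim flow (claims 1, 15, 20).
# All names are hypothetical stand-ins, not the claimed implementation.
class Device:
    def __init__(self, name, ui):
        self.name, self.ui = name, ui

    def frame(self):
        # Stand-in for "real-time image data reflecting a portion of the UI"
        return f"{self.name}:{self.ui}"


def extract_features(frames):
    # Stand-in for extracting "dynamic features" from each image stream
    return [{"window": i, "feature": f} for i, f in enumerate(frames)]


def cast_session(display_log, dev_a, dev_b):
    # 1-2. Receive a cast request from each device (implicit here)
    # 3. Establish an isolated encrypted link per device (simulated labels)
    channels = {d.name: f"enc({d.name})" for d in (dev_a, dev_b)}
    # 4. Receive real-time image data from both devices
    frames = [dev_a.frame(), dev_b.frame()]
    # 5-6. Extract dynamic features and build a two-window composite view
    view = extract_features(frames)
    # 7-8. Mirror the composite ("mirrored image data") to the display
    display_log.append(view)
    return channels, view


log = []
channels, view = cast_session(log, Device("A", "listing"), Device("B", "map"))
print(len(view))  # one window per casting device
```

The limitation Jones was found not to disclose (an interaction on the second stream acting on features of the first) would sit on top of this loop as a cross-window event handler; it is omitted here deliberately.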

Prosecution Timeline

Dec 09, 2024
Application Filed
Nov 15, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594450
MOTOR FUNCTION REHABILITATION SYSTEM AND METHOD
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12596511
INFORMATION PROCESSING METHOD, INFORMATION PROCESSING DEVICE, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12585162
DISPLAYING IMAGES ON TOTAL INTERNAL REFLECTIVE DISPLAYS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12586547
COMPENSATION DEVICE AND METHOD FOR DISPLAY APPARATUS, DISPLAY APPARATUS, AND COMPUTER STORAGE MEDIUM
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12582002
LEFT AND RIGHT PROJECTORS FOR DISPLAY DEVICE
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 94% (+18.0%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 668 resolved cases by this examiner. Grant probability derived from career allow rate.
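A small sketch of how the projection figures above combine. Treating the interview lift as a simple additive adjustment to the career allow rate is an assumption about this dashboard's model, not a documented formula:

```python
# Projection figures from the cards above; additive lift is an assumption.
base = 0.76   # career allow rate, used as the base grant probability
lift = 0.18   # observed interview lift on resolved cases

with_interview = min(base + lift, 1.0)  # cap at 100%
print(f"{with_interview:.0%}")  # 94%
```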
