Prosecution Insights
Last updated: April 19, 2026
Application No. 17/345,429

Situationally Aware Social Agent

Status: Non-Final OA (§103)
Filed: Jun 11, 2021
Examiner: HICKS, AUSTIN JAMES
Art Unit: 2142
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Disney Enterprises Inc.
OA Round: 5 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (308 granted / 403 resolved), above average at +21.4% vs TC avg
Interview Lift: +25.1% allow rate for resolved cases with an interview (strong)
Typical Timeline: 3y 4m average prosecution; 54 applications currently pending
Career History: 457 total applications across all art units
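The headline numbers above are reproducible with one line of arithmetic. A minimal sketch (the granted/resolved counts come from the panel; the implied Tech Center average is my back-calculation from the stated +21.4% delta, not a figure the panel reports directly):

```python
# Reconstruct the examiner-stats arithmetic shown in the panel above.
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(308, 403)   # ~76.4%, displayed as 76%
tc_avg = career - 21.4          # implied TC average, since delta = career - TC avg

print(f"career allow rate: {career:.1f}%")
print(f"implied TC average: {tc_avg:.1f}%")

# Interview lift (+25.1% per the panel) is the difference in allow rate
# between resolved cases with and without an examiner interview.
```

Note the display rounds 76.4% down to 76%; the "With Interview" figure of 99% is the tool's own capped projection, not derivable from these two inputs alone.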

Statute-Specific Performance

§101: 13.9% (-26.1% vs TC avg)
§103: 46.3% (+6.3% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§112: 19.2% (-20.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 403 resolved cases.
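A quick consistency check on the table above: backing the Tech Center baseline out of each row (baseline = rate minus delta) gives the same ~40% for every statute, which suggests the tool compares all four rates against a single TC-average estimate rather than per-statute averages. That inference is mine; a sketch of the subtraction:

```python
# Back out the implied Tech Center baseline from each statute-specific
# rate and its reported delta (rate - delta = baseline).
rows = {
    "§101": (13.9, -26.1),
    "§103": (46.3, +6.3),
    "§102": (17.3, -22.7),
    "§112": (19.2, -20.8),
}
for statute, (rate, delta) in rows.items():
    print(f"{statute}: examiner {rate}%, implied TC baseline {rate - delta:.1f}%")
```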

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/19/2025 has been entered.

Response to Arguments

Applicant's arguments with respect to claims 1, 4-11, 13-14, 17-24 and 26-31 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4, 10, 11, 13, 14, 17, 23, 24 and 26-31 are rejected under 35 U.S.C. 103 as being unpatentable over US20170289766A1 to Scott et al., "Yet another arduino sound localizer (2 microphones, angle of arrival)" by peconsti, and US20180227694A1 to King.

Claims 5, 6, 18 and 19 are rejected under 35 U.S.C.
103 as being unpatentable over US20170289766A1 to Scott et al., "Yet another arduino sound localizer (2 microphones, angle of arrival)" by peconsti, US20180227694A1 to King, and US20200320427A1 to Kennedy et al.

Claims 7, 9, 21 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over US20170289766A1 to Scott et al., "Yet another arduino sound localizer (2 microphones, angle of arrival)" by peconsti, US20180227694A1 to King, and US20200184306A1 to Buhmann et al.

Claims 8 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US20170289766A1 to Scott et al., "Yet another arduino sound localizer (2 microphones, angle of arrival)" by peconsti, US20180227694A1 to King, US20200184306A1 to Buhmann et al., and US20190198006A1 to Guo et al.

Scott teaches Claims 1 and 14 (Currently Amended): A system comprising: a processing hardware; (Scott fig. 1 [104]) a memory storing a software code, (Scott fig. 1 [106]) a radio detection and ranging (radar) or a radar detector configured to collect radar data; (Scott para 43 "The physical presence of people (i.e. people nearby the system) may be detected using sensors 132 like…microwave radar…") at least one (Scott, Paragraph [0043], "The physical presence of people (i.e. people nearby the system) may be detected using sensors 132 like…microwave radar, microphones or cameras,… modalities like radar can provide more fine-grained information, that can include a positioning element (e.g. x/y/z position relative to the PC)") a social agent instantiated as a robot, a virtual character, or a tabletop or wall-mounted device, the social agent comprising an output unit configured to effectuate an interactive expression of the social agent, (Scott fig. 1 client device includes several tabletop devices with screens and "first digital assistant" (Scott para 106) that are equivalent to Applicant's social agent.
The interactive expression is taught in the transition between experiences in Scott para 106, "the first digital assistant experience is adapted to generate a second digital assistant experience at the client device that is based on a difference between the first detected distance and the second detected distance (block 706).")

the processing hardware configured to execute the software code to: process the radar data to obtain radar-based location data corresponding to a location of a user within a venue; (Scott para 50 "Position: As noted above, radar or camera-based sensors 132 may provide a position for one or multiple users. The position is then used to infer context, e.g. approaching the client device 102, moving away from the client device 102, presence in a different room than the client device 102, and so forth.")

process the audio data (Scott para 47 "The physical presence of people (i.e. people nearby the system) may be detected using… microphones…" Scott para 21 "In another example, a microphone may be employed to measure loudness of the environment, and change the system behavior prior to receiving any voice input, such as by showing a prompt on the screen that changes when someone walks closer to a reference point.")

correlate the radar-based location data and the audio-based location data with the (Scott para 47 "The physical presence of people (i.e. people nearby the system) may be detected using sensors 132 … microphones or cameras, and using techniques such as Doppler radar …")

identify, based on the location of the user, the determined environment and the audio-based venue data, an interactive expression for use by the social agent to interact with the user, wherein the interactive expression is identified using a portion of the audio data collected from the source of sound other than the user; and (Scott para 21 "In another example, a microphone may be employed to measure loudness of the environment, and change the system behavior prior to receiving any voice input, such as by showing a prompt on the screen that changes when someone walks closer to a reference point." The interactive expression is the prompt. The loudness of the environment is the portion of the audio collected from a source other than the user.)

execute, using the output unit, the interactive expression used by the social agent, wherein the interactive expression includes context from the portion of the audio data collected from the source of sound. (Scott para 21 "In another example, a microphone may be employed to measure loudness of the environment, and change the system behavior prior to receiving any voice input, such as by showing a prompt on the screen that changes when someone walks closer to a reference point." Showing the prompt on the screen is executing the identified expression.)

Scott doesn't teach a directional microphone configured to collect audio data and identify an angle of arrival of the audio data. However, peconsti teaches a directional microphone configured to collect audio data and identify an angle of arrival of the audio data. (peconsti second paragraph "Thus, it is need to estimate the angle of arrival of a given sound. To achieve that, an electronic circuit to acquire signals from 2 microphones into the arduino was built." The two microphones are one directional microphone.)
Scott, the claims, and peconsti are all directed to microphone applications. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to receive angle of arrival data "in order to achieve proper monitoring, the monitoring system should be able to attribute sounds to the specific [person]…" peconsti first paragraph.

Scott doesn't teach a map of the venue. However, King teaches a memory storing a software code, and a map of a venue; (King para 55 "The database may be used by algorithms to present a display of a seating map of a specific venue…" King para 15 "obtaining spatial reference data for a specific venue. The method also includes creating a digital model of the specific venue.") correlate the radar-based location data and the audio-based location data with the map of the venue to determine the environment surrounding the user within the venue; (King para 82 "navigation matrix of panoramic video and audio viewports that in a particular geographic location or venue;…" Correlating radar data, audio and map data is taught by the navigation matrix.)

King, Scott and the claims deal with detecting audio in different environments. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to include spatial reference data of a specific venue in Scott because "the representation of the specific venue may also include a representation of a specific stage or other performance venue may be superimposed with graphical depiction of historical data related to the venue. In some embodiments such a representation may aid in a process of designing audio capture locations for a future spectator event." King para 65. Improving audio quality based on the venue will improve the robot's accuracy in collecting audio data.

Scott teaches Claims 4 and 17 (Original): The system of claim 1, wherein the interactive expression comprises one of speech or text.
(Scott para 21 "In another example, a microphone may be employed to measure loudness of the environment, and change the system behavior prior to receiving any voice input, such as by showing a prompt on the screen that changes when someone walks closer to a reference point." Scott para 67 "very large characters can be displayed that provide simple messages and/or prompts, such as "Hello!," "May I Help You?," and so forth." Scott para 65 "digital assistant 126 outputs an audio prompt…")

Scott teaches Claims 5 and 18 (Previously Presented): The system of claim 1, wherein the social agent is instantiated (Scott para 67 "very large characters can be displayed that provide simple messages and/or prompts, such as "Hello!," "May I Help You?," and so forth.") Scott's characters are different from Applicant's virtual character. However, Kennedy teaches the virtual character. (Kennedy para 16 "social agent may take the form of a virtual character rendered on a display…") Kennedy, Scott and the claims are all directed to social agents. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to have a virtual character relay Scott's messages because devices without virtual characters "tend to lack character and naturalness…" Kennedy para 14.

Scott teaches Claims 6 and 19 (Previously Presented): The system of claim 4, wherein the interactive expression (Scott para 67 displays a text expression.) Scott doesn't teach a gesture etc. However, Kennedy teaches that the expression comprises at least one of a gesture, a facial expression, or a posture. (Kennedy para 71 "where personality profile 646 b of the character assumed by interactive social agent 116 a or 116 b is that of an evil villain, the expression smile-smile-smile might be remapped to modified expression (sneer-sneer-sneer) 648.") Kennedy, Scott and the claims are all directed to social agents.
It would have been obvious to a person having ordinary skill in the art, at the time of filing, to have a virtual character relay Scott's messages with a facial expression because devices without virtual characters "tend to lack character and naturalness…" Kennedy para 14.

Scott teaches Claims 7 and 21 (Previously Presented): The system of claim 1, wherein the processing hardware is further configured to execute the software code to: recognize, (Scott para 52 "Identity recognition can employ camera-based face recognition or more coarse-grained recognition techniques that approximate the identity of a user." Scott para 49 "When the identity of a user is known (such as discussed below), it is possible to apply a different speech recognition model that actually fits the user's accent, language, acoustic speech frequencies, and demographic.") Scott doesn't teach the anonymous user history. However, Buhmann teaches how to recognize, using the audio data, the user as an anonymous user with whom the social agent has previously interacted. ("virtual agent 150/350 is typically able to distinguish one anonymous human guest with whom a previous character interaction has occurred from another, as well as from anonymous human guests having no previous interaction experience with the character…" and "the presence of guest 126a/126b or guest object(s) 148 can be detected based on sensor data received from input module 130/230. That sensor data may also be used to reference interaction history database 108 to identify guest 126a/126b" where input module 130/230 contains a microphone to utilize audio data (Buhmann, Drawings, Figure 2, 238).)

The claims, Buhmann, as well as Scott are directed towards the field of interactive agents. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to recognize anonymous users "in order to improve the performance of virtual agent…" Buhmann para 63.
Scott teaches Claim 8 (Previously Presented): The system of claim 7, further comprising: a wherein the hardware processor is configured to execute the software code (Scott para 52 "Identity recognition can employ camera-based face recognition or more coarse-grained recognition techniques that approximate the identity of a user.") The Scott/Buhmann combination fails to explicitly teach recognizing a user using a trained machine learning model. However, Guo teaches a trained machine learning model. (Guo para 92 "these augmented features are then used to assess the probability that a particular word, phoneme, or phone was heard. In more modern systems, this computation is performed by a specially trained deep neural network.") The claims, Guo, Scott and Buhmann are all directed to interactive agents. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to use a trained NN to recognize the user because "neural networks or GMMs may optionally be trained for a specific individual to give improved results." Guo para 92.

Scott teaches Claims 9 and 22 (Previously Presented): The system of claim 1, wherein the memory further stores an interaction history (Scott, Paragraph [0095], "Different contextual factors are detailed throughout this discussion, and include information such as…interaction history with the digital assistant, and so forth." and (Scott, [0018]) "the digital assistant can respond to queries, provide appropriate information, offer suggestions, adapt UI visualizations, and takes actions to assist the user depending on the context and sensor data…") Scott doesn't teach a database for holding the interaction history. However, Buhmann teaches an interaction history database.
(Buhmann para 54 "identification of guest 126 a/126 b or guest object(s) 148 may be performed by software code 110, executed by hardware processor 104, and using input module 130/230 and interaction history database 108.") The claims, Scott and Buhmann are directed towards the field of interactive agents. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Buhmann with the teachings of Scott by performing interactions that take into account previous interactions with a user that are stored in a database. Buhmann provides additional motivation for the combination (Buhmann, Background, "conventional conversational agents…omit many of the properties that a real human would offer in an interaction that make that interaction not only informative but also enjoyable or entertaining. For example, an interaction between two humans may be influenced or shaped by the personalities of those human participants, as well as the history or storyline of their previous interactions. Thus, there remains a need in the art for a virtual agent capable of engaging in an extended interaction…").

Scott teaches Claims 10 and 23 (Previously Presented): The system of claim 1, wherein the processing hardware is further configured to execute the software code to: recognize, using at least one of the radar-based location data or the audio-based location data, a relocation of the user relative to the social agent. (Scott, Paragraph [0043], "The physical presence of people (i.e. people nearby the system) may be detected using sensors 132 like…microwave radar, microphones or cameras,… modalities like radar can provide more fine-grained information, that can include a positioning element (e.g. x/y/z position relative to the PC)" and (Scott, [0052]) "As the person approaches the system, indications such as icons, animations, and/or audible alerts may be output to signal that different types of interaction are active…").
Scott teaches Claims 11 and 24 (Previously Presented): The system of claim 1, wherein the processing hardware is further configured to execute the software code to enhance, using at least one of sound produced by or a data input received from the source of sound, a signal-to-noise ratio of the audio data. (Scott, Paragraph [0045], "the system (e.g., the client device 102) can disambiguate between multiple sound sources, such as by filtering out the position of a known noise-producing device (e.g., a television) or background noise…" This claim is missing an "and/or"; the examiner interprets the claim to mean a data input received from the source of sound or a signal-to-noise ratio.)

Scott teaches Claims 13 and 26 (Currently Amended): The system of claim 1, wherein the processing hardware is further configured to execute the software code to: process the radar data to obtain radar-based venue data corresponding to the environment surrounding the user, and recognize, using the radar-based venue data, the source of sound as an inanimate source of sound, (Scott para 47 "The physical presence of people (i.e. people nearby the system) may be detected using sensors 132 like… microwave radar, microphones…" Scott para 49 "the system (e.g., the client device 102) can disambiguate between multiple sound sources, such as by filtering out the position of a known noise-producing device (e.g., a television) or background noise.") recognize, using the map of the venue, the source of sound as the inanimate source of sound.
(Scott para 49 "the system (e.g., the client device 102) can disambiguate between multiple sound sources, such as by filtering out the position of a known noise-producing device (e.g., a television) or background noise.")

Claims 27 and 30 (Previously presented): The system of claim 1, wherein: the radar data and the audio data are timestamped, and the processing hardware is further configured to execute the software code to correlate the radar-based location data and the audio-based location data to determine the location of the user at a given point in time. (Scott para 18 "Aspects of digital assistant experience based on presence sensing include using presence sensing… and adapt the visual experience based on … context information such as the time of day…")

Scott teaches Claims 28 and 31 (Previously presented): The system of claim 1, wherein: the source of sound comprises an entertainment system or a person different from the user, and the interactive expression incorporates a subject matter of an output of the entertainment system or speech of the person. (Scott para 49 "the system (e.g., the client device 102) can disambiguate between multiple sound sources, such as by filtering out the position of a known noise-producing device (e.g., a television) or background noise.")

Scott teaches Claim 29 (Previously presented): The system of claim 1, wherein the output unit comprises at least one of a Text-To-Speech (TTS) module, a speaker, a display, a mechanical actuator, or a haptic actuator configured to effectuate the interactive expression. (Scott para 73 "if the system has access to multiple speakers, different speakers can be chosen for output to Bob and Alice, and respective volume levels at the different speakers can be optimized for Bob and Alice.")

Scott teaches Claim 20 (Currently Amended): The method of claim 14, further comprising: A method for use by a system including a processing hardware and a memory storing (Scott fig.
6) receiving, by the software code executed by the processing hardware, radar data and audio data collected by at least one (Scott, Paragraph [0043], "The physical presence of people (i.e. people nearby the system) may be detected using sensors 132 like…microwave radar, microphones or cameras,… modalities like radar can provide more fine-grained information, that can include a positioning element (e.g. x/y/z position relative to the PC)")

identifying, by the software code executed by the processing hardware, (Scott, Paragraph [0043], "The physical presence of people (i.e. people nearby the system) may be detected using sensors 132 like…microwave radar, microphones…")

processing, by the software code executed by the processing hardware, the radar data, the audio data, (Scott para 50 "Position: As noted above, radar or camera-based sensors 132 may provide a position for one or multiple users. The position is then used to infer context, e.g. approaching the client device 102, moving away from the client device 102, presence in a different room than the client device 102, and so forth." The venue is the place where the client device is.)
recognizing, by the software code executed by the processing hardware, using the (Scott para 52 "Identity recognition can employ camera-based face recognition or more coarse-grained recognition techniques that approximate the identity of a user." Scott para 49 "When the identity of a user is known (such as discussed below), it is possible to apply a different speech recognition model that actually fits the user's accent, language, acoustic speech frequencies, and demographic.")

processing, by the software code executed by the processing hardware, the radar data, the audio data, (Scott para 21 "In another example, a microphone may be employed to measure loudness of the environment, and change the system behavior prior to receiving any voice input, such as by showing a prompt on the screen that changes when someone walks closer to a reference point." The interactive expression is the prompt.)

correlating, by the software code executed by the processing hardware, the radar-based venue data and the audio-based venue data (Scott para 47 "The physical presence of people (i.e. people nearby the system) may be detected using sensors 132 … microphones or cameras, and using techniques such as Doppler radar …")

identifying, by the software code executed by the processing hardware based on the location of the at least one user and the determined environment, an interactive expression for use by the social agent to interact with the at least one user; and (Scott para 21 "In another example, a microphone may be employed to measure loudness of the environment, and change the system behavior prior to receiving any voice input, such as by showing a prompt on the screen that changes when someone walks closer to a reference point." The interactive expression is the prompt.)

executing, by the software code executed by the processing hardware, the interactive expression used by the social agent.
(Scott para 21 "In another example, a microphone may be employed to measure loudness of the environment, and change the system behavior prior to receiving any voice input, such as by showing a prompt on the screen that changes when someone walks closer to a reference point." Showing the prompt on the screen is executing the identified expression.)

Scott doesn't teach a directional microphone configured to collect audio data and identify an angle of arrival of the audio data. However, peconsti teaches a directional microphone configured to collect audio data and identify an angle of arrival of the audio data. (peconsti second paragraph "Thus, it is need to estimate the angle of arrival of a given sound. To achieve that, an electronic circuit to acquire signals from 2 microphones into the arduino was built." The two microphones are one directional microphone.) Scott, the claims, and peconsti are all directed to microphone applications. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to receive angle of arrival data "in order to achieve proper monitoring, the monitoring system should be able to attribute sounds to the specific [person]…" peconsti first paragraph.

Scott doesn't teach the anonymous user history. However, Buhmann teaches recognizing, by the software code executed by the processing hardware and using the ("virtual agent 150/350 is typically able to distinguish one anonymous human guest with whom a previous character interaction has occurred from another, as well as from anonymous human guests having no previous interaction experience with the character…" and "the presence of guest 126a/126b or guest object(s) 148 can be detected based on sensor data received from input module 130/230. That sensor data may also be used to reference interaction history database 108 to identify guest 126a/126b" where input module 130/230 contains a microphone to utilize audio data (Buhmann, Drawings, Figure 2, 238).)
The claims, Buhmann, as well as Scott are directed towards the field of interactive agents. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to recognize anonymous users "in order to improve the performance of virtual agent…" Buhmann para 63.

Buhmann and Scott don't teach a trained NN. However, Guo teaches a trained NN. (Guo para 92 "these augmented features are then used to assess the probability that a particular word, phoneme, or phone was heard. In more modern systems, this computation is performed by a specially trained deep neural network.") The claims, Guo, Scott and Buhmann are all directed to interactive agents. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to use a trained NN to recognize the user because "neural networks or GMMs may optionally be trained for a specific individual to give improved results." Guo para 92.

Scott doesn't teach a map of the venue. However, King teaches a memory storing a software code, and a map of a venue; (King para 55 "The database may be used by algorithms to present a display of a seating map of a specific venue…" King para 15 "obtaining spatial reference data for a specific venue. The method also includes creating a digital model of the specific venue.") correlating, by the software code executed by the processing hardware, the radar-based venue data and the audio-based venue data with the map of the venue to determine the environment surrounding the at least one user within the venue; (King para 82 "navigation matrix of panoramic video and audio viewports that in a particular geographic location or venue;…" Correlating radar data, audio and map data is taught by the navigation matrix.) King, Scott and the claims deal with detecting audio in different environments.
It would have been obvious to a person having ordinary skill in the art, at the time of filing, to include spatial reference data of a specific venue in Scott because "the representation of the specific venue may also include a representation of a specific stage or other performance venue may be superimposed with graphical depiction of historical data related to the venue. In some embodiments such a representation may aid in a process of designing audio capture locations for a future spectator event." King para 65. Improving audio quality based on the venue will improve the robot's accuracy in collecting audio data.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Austin Hicks whose telephone number is (571)270-3377. The examiner can normally be reached Monday - Thursday 8-4 PST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mariela Reyes, can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AUSTIN HICKS/
Primary Examiner, Art Unit 2142
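Every independent-claim rejection above leans on peconsti's two-microphone angle-of-arrival estimate. For readers unfamiliar with the technique, a minimal far-field sketch (the function name, sampling rate, and microphone spacing below are illustrative choices, not taken from peconsti or the claims): cross-correlate the two channels to find the inter-channel delay, then recover the angle from sin(theta) = c * delay / d.

```python
import numpy as np

def angle_of_arrival(left: np.ndarray, right: np.ndarray,
                     fs: float, mic_spacing: float,
                     speed_of_sound: float = 343.0) -> float:
    """Estimate the angle of arrival (degrees, 0 = broadside) of a far-field
    source from two microphone channels. Positive angle means the left channel
    lags, i.e. the source is nearer the right microphone."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # samples; + means left lags
    delay = lag / fs                           # seconds
    sin_theta = np.clip(speed_of_sound * delay / mic_spacing, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))

# Synthetic check: broadband noise arriving 30 degrees toward the right mic,
# with 10 cm spacing at 48 kHz (delay ~0.146 ms, about 7 samples).
fs, d, c = 48_000, 0.10, 343.0
shift = int(round(d * np.sin(np.radians(30.0)) / c * fs))
sig = np.random.default_rng(0).standard_normal(2400)
left, right = sig[:-shift], sig[shift:]        # left channel lags by `shift`
print(f"estimated angle: {angle_of_arrival(left, right, fs, d):.1f} deg")
```

Note the delay is quantized to whole samples, so angular resolution degrades sharply near broadside spacing limits; real implementations interpolate the correlation peak (e.g. GCC-PHAT variants) for sub-sample delays.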

Prosecution Timeline

Jun 11, 2021: Application Filed
Aug 27, 2024: Non-Final Rejection (§103)
Jan 02, 2025: Response Filed
Apr 04, 2025: Final Rejection (§103)
Jun 02, 2025: Response after Non-Final Action
Jun 16, 2025: Request for Continued Examination
Jun 20, 2025: Response after Non-Final Action
Jun 30, 2025: Non-Final Rejection (§103)
Oct 02, 2025: Response Filed
Oct 14, 2025: Final Rejection (§103)
Dec 19, 2025: Response after Non-Final Action
Jan 09, 2026: Request for Continued Examination
Jan 26, 2026: Response after Non-Final Action
Jan 29, 2026: Non-Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591767: NEURAL NETWORK ACCELERATION CIRCUIT AND METHOD (granted Mar 31, 2026; 2y 5m to grant)
Patent 12554795: REDUCING CLASS IMBALANCE IN MACHINE-LEARNING TRAINING DATASET (granted Feb 17, 2026; 2y 5m to grant)
Patent 12530630: Hierarchical Gradient Averaging For Enforcing Subject Level Privacy (granted Jan 20, 2026; 2y 5m to grant)
Patent 12524694: OPTIMIZING ROUTE MODIFICATION USING QUANTUM GENERATED ROUTE REPOSITORY (granted Jan 13, 2026; 2y 5m to grant)
Patent 12524646: VARIABLE CURVATURE BENDING ARC CONTROL METHOD FOR ROLL BENDING MACHINE (granted Jan 13, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 76%
With Interview: 99% (+25.1%)
Median Time to Grant: 3y 4m
PTA Risk: High
Based on 403 resolved cases by this examiner. Grant probability derived from career allow rate.
