Prosecution Insights
Last updated: April 19, 2026
Application No. 18/680,870

SYSTEMS AND METHODS FOR COORDINATED OUTPUT OF A VISUAL EFFECT FOR ENHANCED AUDIENCE ENGAGEMENT AT A LIVE EVENT

Non-Final OA (§102, §103)

Filed: May 31, 2024
Examiner: USSERY, CAIDEN ALEXANDER
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Adeia Guides Inc.
OA Round: 1 (Non-Final)

Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg). Grants only 0% of cases.
Interview Lift: +0.0% (minimal lift), across resolved cases with interview.
Avg Prosecution: 2y 9m (typical timeline).
Total Applications: 8 across all art units (8 currently pending).

Statute-Specific Performance

§103: 50.0% (+10.0% vs TC avg)
§102: 44.4% (+4.4% vs TC avg)
§112: 5.6% (-34.4% vs TC avg)

Percentages are shown against a Tech Center average estimate. Based on career data from 0 resolved cases.

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-13, 16, & 20-21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Harry Snyder (U.S. Pat. Pub. App. No. US-20150012308-A1, hereinafter “Snyder”).

Regarding claims 1, 20, and 21, Snyder teaches [a] method comprising:

identifying a plurality of devices at a live event: “… a plurality of sets of instructions executable by each of a plurality of mobile devices possessed by respective ones of a plurality of attendees of at least one event at the selected one of the venues.” (Snyder, ¶ [0006]), where the devices are at an event;

determining, for each respective device of the plurality of devices at the live event, a current position of the respective device within a venue of the live event (Snyder, ¶ [0029]) by:

determining an initial position of the respective device: “The at least one circuit may include at least one processor unit, and may be communicatively coupled to receive location specification information include geolocation coordinates derived by a geolocation system.” (Snyder, ¶ [0029]); and

tracking, using data from one or more sensors, a position of the respective device within the venue relative to the initial position: “Alternatively or additionally, the mobile device may identify or collect the respective location information associated with various attendees or mobile device via a global positioning system receiver or via triangulation with cellular or wireless antennas or base stations if the mobile device is sufficiently equipped” (¶ [0066]), where this may be used in addition to determining an initial position;

performing recalibration of the current position by causing the respective device to capture an image, and analyzing the captured image: “Alternatively or additionally, the mobile device may identify or collect the respective location information associated with various attendees or mobile device via imaging of a surrounding environment, from which at least approximate location can be discerned” (¶ [0066]), where this may be used in addition to determining the initial position; and

based on the determined current positions of the plurality of devices, causing the plurality of devices to output signals that are coordinated to form a visual effect: “In one non-limiting example, an application running on a mobile device may receive a screen control sequence which causes the mobile device screen to output light in conjunction or in combination (e.g., spatially and/or temporally) with other mobile devices at a venue during an event” (¶ [0063]).

In regards to claim 20, claim 1 is substantially similar to claim 20; hence the rejection analysis for claim 1 is also applied to claim 20. Snyder teaches the additional limitation that the method of claims 1 and 20 is “performed by a server…” (¶ [0089]).

In regards to claim 21, claim 1 is substantially similar to claim 21; hence the rejection analysis for claim 1 is also applied to claim 21. Snyder teaches the additional limitations of “A system comprising: a memory; a control circuitry configured to…” perform the functions laid out above in claims 1 and 21 (¶ [0094]).
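Read together, claims 1, 20, and 21 describe a three-stage positioning loop: an absolute initial fix (geolocation, ¶ [0029]), relative tracking from onboard sensors, and periodic image-based recalibration (¶ [0066]). Below is a minimal sketch of that loop, assuming three hypothetical sensor callables (get_gps_fix, read_imu_delta, locate_from_image); none of these names come from Snyder or the application under examination.

```python
# Minimal sketch of the claimed positioning loop, assuming hypothetical
# sensor callables. Illustrative only, not an implementation from the art.
from dataclasses import dataclass

@dataclass
class Position:
    x: float  # meters east of a venue origin
    y: float  # meters north of a venue origin

def track_position(get_gps_fix, read_imu_delta, locate_from_image,
                   steps: int, recalibrate_every: int) -> Position:
    """Start from an initial fix, apply relative sensor displacements,
    and periodically re-anchor the estimate from a captured image."""
    pos = get_gps_fix()                       # initial position
    for step in range(1, steps + 1):
        dx, dy = read_imu_delta()             # relative motion from sensors
        pos = Position(pos.x + dx, pos.y + dy)
        if step % recalibrate_every == 0:
            fix = locate_from_image()         # image-based recalibration
            if fix is not None:               # keep dead reckoning on no match
                pos = fix
    return pos
```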
Regarding claim 2, Snyder teaches [t]he method of claim 1, wherein the identifying the plurality of devices at the live event further comprises (¶ [0006]): receiving, from the plurality of devices, a user interface input requesting to participate in the coordinated visual effect. [Image: media_image1.png, a screenshot of the prompt] Where the user is asked if they would like to join the mobile event (¶ [0052]).

Regarding claim 3, Snyder teaches [t]he method of claim 1, wherein the signals that are coordinated to form the visual effect are caused to be output via a flashlight of the respective device: “… the set of instructions specifying a temporal sequence of instructions to actuate at least one transducer of the respective mobile devices to emit at least one of light, sound, or light and sound which in totality form at least part of the light show, the sound show or the light and sound show.” (¶ [0008]), where the light may be produced by the screen or a camera flash.

Regarding claim 4, Snyder teaches [t]he method of claim 1, wherein the signals that are coordinated to form the visual effect are caused to be output via a screen of the respective device such that the screen of the respective device provides one or more pixels of the visual effect: “This approach may advantageously control a large number of mobile devices as an ad hoc group or set, to essentially create an enormous display screen, each of the mobile devices or a subset of mobile devices (e.g., 4×4), being operated as an effective pixel in a visual or aural presentation.” (¶ [0062]).
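Claim 4's "screen as a pixel" limitation, read onto Snyder's "enormous display screen" (¶ [0062]), amounts to sampling a target frame at each device's venue coordinates. A hedged sketch follows; the linear venue-to-frame scaling and the frame layout are assumptions for the example.

```python
# Illustrative "each screen is one pixel" mapping (claim 4, Snyder ¶ [0062]).
# frame is assumed to be an RGB image as a list of rows of (r, g, b) tuples.
def color_for_device(pos_x: float, pos_y: float,
                     venue_w: float, venue_h: float, frame):
    """Return the (r, g, b) color the device at (pos_x, pos_y) should show."""
    rows, cols = len(frame), len(frame[0])
    col = min(int(pos_x / venue_w * cols), cols - 1)  # clamp to frame edge
    row = min(int(pos_y / venue_h * rows), rows - 1)
    return frame[row][col]
```

For instance, a device standing at the center of a 10 m × 10 m venue under a 2×2 frame would be assigned frame[1][1].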
Regarding claim 5, Snyder teaches [t]he method of claim 4, wherein (¶ [0007]):

performing the recalibration further comprises causing at least a first device and a second device of the plurality of devices to communicate via peer-to-peer communication, to identify a current position of the first device relative to a current position of the second device: “Once the location for each attendee is specified, mobile devices of attendees at different locations can be controlled, for example relative to each other, in conjunction or in tandem as ad hoc groups or sets.” (¶ [0062]); and

causing the plurality of devices to output the signals that are coordinated to form the visual effect by causing the screen of the respective device to correspond to one or more pixels of the visual effect based at least in part on the current position of the first device relative to a current position of the second device: “For example, the mobile devices may be controlled to produce a defined or specified pattern of output based at least in part on the locations of mobile devices. Thus, one can design a visual output pattern spanning the venue or portion thereof, operating each mobile device or groups of mobile devices as respective pixels to create a visual effect (e.g., still or moving image, text).” (¶ [0062]).

Regarding claim 6, Snyder teaches [t]he method of claim 1, further comprising causing the plurality of devices to output audio along with the signals that are coordinated to form the visual effect: “One can additionally or alternatively design an aural output pattern spanning the venue or portion thereof, operating each mobile device or groups of mobile devices to create an aural effect (e.g., still or moving sound effect)” (¶ [0062]).

Regarding claim 7, Snyder teaches [t]he method of claim 1, wherein the captured image is a first captured image, and determining the initial position of the respective device comprises:

causing the respective device to capture a second image of at least a portion of the venue of the live event: “… the mobile device may identify or collect the respective location information associated with various attendees or mobile device via imaging of a surrounding environment …” (¶ [0066]), where the imaging may be multiple images of the venue;

comparing the second image to a spatial representation of the venue: “… from which at least approximate location can be discerned by comparison to reference images of the venue …” (¶ [0066]), where the reference images act as a spatial representation to be compared with; and

determining the initial position of the respective device based on the comparison: “… from which at least approximate location can be discerned by comparison to reference images of the venue. Such may be performed, for example by the distribution or assignment system 140.” (¶ [0066]), where the assignment system may process photos to determine the device location automatically.

Regarding claim 8, Snyder teaches [t]he method of claim 1, wherein the tracking the position of the respective device relative to the initial position further comprises (¶ [0066]): tracking changes in at least one of an acceleration, a rotational position, or an orientation of the respective device: “… additionally, the respective venue location information associated with various attendees or mobile devices may be identified or derived via user input, for instance by the user entering venue identification (e.g., name), event identification (e.g., name, date, time), seat or position information (e.g., section, row and/or seat information) via a keypad or virtual keyboard of the mobile device. Alternatively or additionally, the mobile device may identify or collect the respective location information associated with various attendees or mobile device via a global positioning system receiver or via triangulation” (¶ [0066]), where the use of GPS or other tracking methods involves movement or acceleration.
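Claim 7's comparison of a captured image to a "spatial representation" (Snyder's reference images, ¶ [0066]) can be pictured as nearest-neighbor matching against reference images taken at known positions. The sketch below uses a deliberately naive metric (sum of absolute differences over same-size grayscale thumbnails) as a stand-in for whatever matcher a real system would use.

```python
# Naive sketch of image-based initial positioning (claim 7, ¶ [0066]):
# adopt the known capture position of the most similar reference image.
def locate_by_reference(captured, references):
    """captured: 2D grayscale list; references: list of (thumbnail, (x, y))
    pairs with same-size thumbnails. Returns the best match's position."""
    def difference(a, b):
        return sum(abs(pa - pb)
                   for row_a, row_b in zip(a, b)
                   for pa, pb in zip(row_a, row_b))
    _, position = min(((difference(captured, thumb), pos)
                       for thumb, pos in references),
                      key=lambda pair: pair[0])
    return position
```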
Regarding claim 9, Snyder teaches [t]he method of claim 1, wherein:

the analyzing the captured image further comprises comparing the captured image to a spatial representation of the venue: “Alternatively or additionally, the mobile device may identify or collect the respective location information associated with various attendees or mobile device via imaging of a surrounding environment, from which at least approximate location can be discerned by comparison to reference images of the venue. Such may be performed, for example by the distribution or assignment system 140.” (Snyder, ¶ [0066]); and

the performing the recalibration of the current position includes updating the current position of the respective device within the venue of the live event based on the comparison: “Attendees may register their respective mobile devices with the distribution or assignment system 140, for example via a downloaded application, commonly referred to as “apps” or via a Website. For example, an attendee may actuate their mobile device to provide their respective venue location information to the distribution or assignment system 140” (Snyder, ¶ [0086]), where the assignment system reads the location information, which is provided as a photo of the venue in paragraph 66, compares it to the representation of the venue, and updates the location accordingly.

Regarding claim 10, Snyder teaches [t]he method of claim 1, wherein the performing recalibration of the current position by causing the respective device to capture the image further comprises:

monitoring a stability level of the respective device, wherein the stability level is based on at least one of an acceleration, a rotational position, or an orientation of the respective device: “At the event, the mobile device may receive or sense one or more trigger or synchronization signals 134. In response, the mobile devices 160 operate according to their respective transducer activation sequences. When displayed in the air, in conjunction with other mobile devices 160 in the ad hoc group or set, the cumulative effect is a visual and/or audio show, which may extend across all or a portion of the venue during the event.” (¶ [0089]), where receiving or sensing a trigger or synchronization signal is synonymous with monitoring a stability level;

determining that the stability level indicates that the device is stable: “… the application transmits and receives all required data or information to allow the attendee and their mobile device to participate as described above. Other, non-exclusive, possibilities include receiving data associated with the event or venue from a server. Such may, for example, include receiving photographs or images from a band playing at a concert, which may be in real- or almost real-time.” (¶ [0089]), where, after the user displays the device, the application assesses the data required to see whether the device is active and able to participate, i.e., stable; an image may be used to discern stability; and

causing the respective device to capture the image based on determining that the stability level of the respective device indicates that the device is stable: “Photographs or images may be received together with one or more transducer activation sequences” (¶ [0089]), where the transducer activation sequence may cause the device to capture a photo after determining the device is available, or stable.

Regarding claim 11, Snyder teaches [t]he method of claim 1, wherein the causing the plurality of devices to output the signals that are coordinated to form the visual effect further comprises:

monitoring a stability level of the respective device, wherein the stability level is based on at least one of an acceleration, a rotational position, or an orientation of the respective device: “attendees may participate in an event by displaying their mobile devices, which are controlled to produce a light, sound or light and sound presentation or show via the respective displays and/or speakers of the mobile devices” (¶ [0061]); “When displayed in the air, in conjunction with other mobile devices … attendees may participate in and enhance an event at a venue, as well as enhancing their own experience of the event.” (¶ [0089]); “In response to registration, the application transmits and receives all required data or information to allow the attendee and their mobile device to participate as described above” (¶ [0089]), where displaying a mobile device is considered stable, or available and participating, and not displaying the device is not stable; and

based on the monitoring of the stability level, determining whether the respective device is in an active state or an inactive state, wherein the active state indicates that the respective device is currently available to output the signals, and wherein an inactive state indicates that the respective device is currently unavailable to participate in output of the signals that are coordinated to form the visual effect: “The at least one processor unit may provide a grouping tool as part of the user interface, operation of which groups two or more seats as a single separable addressable unit in the light show, the sound show or the light and sound show, based at least in part on a respective attendee active participation status of the seats indicative of whether an attendee logically associated with a seat is actively participating in the light show, the sound show or the light and sound show, and the at least one processor unit may generate a respective set of instructions for each respective group of seats in the venue” (¶ [0007]), where displaying the device is active, or stable and participating.
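Claims 10 and 11 both turn on a "stability level" derived from acceleration, rotation, or orientation. One plausible reading, sketched below with invented window and threshold values, is a rolling variance check over accelerometer magnitudes: low variance means the device is held steady (stable/active), high variance means it is not.

```python
# Hedged sketch of stability monitoring (claims 10-11). The window size and
# variance threshold are invented for illustration; neither the claims nor
# the cited art specifies values.
from collections import deque

class StabilityMonitor:
    def __init__(self, window: int = 20, threshold: float = 0.05):
        self.samples = deque(maxlen=window)   # recent accel magnitudes (g)
        self.threshold = threshold            # max variance considered stable

    def add_sample(self, accel_magnitude: float) -> None:
        self.samples.append(accel_magnitude)

    def is_stable(self) -> bool:
        """True when the window is full and readings vary little, e.g. to
        gate an image capture (claim 10) or mark the device active (claim 11)."""
        if len(self.samples) < self.samples.maxlen:
            return False
        mean = sum(self.samples) / len(self.samples)
        variance = sum((s - mean) ** 2 for s in self.samples) / len(self.samples)
        return variance < self.threshold
```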
Regarding claim 12, Snyder teaches [t]he method of claim 11, wherein the causing the plurality of devices to output the signals that are coordinated to form the visual effect further comprises:

determining a density of a section of the venue, wherein the density specifies a number of devices associated with the active state that are concentrated within the section of the venue: “The authoring system 120 may use the participant information in the mapping the pixels to respective locations. For example, the authoring system 120 may group multiple seats as an effective pixel based on a relative density of participation in a general location in the venue.” (¶ [0081]), where more participants in a section make a denser section; and

selecting the visual effect based on the density: “participation may be higher closer to the floor or stage than in more remote areas. In response, the authoring system 120 may employ a 1:1 mapping between pixels and seats on the floor of the venue, while employing a 1:8 mapping between pixels and seats in an upper deck level of the same venue for the same event” (¶ [0081]), where a more defined image mapping is possible with more pixels or participants, and the more defined visual effect is selected based on the density detected.

Regarding claim 13, Snyder teaches [t]he method of claim 12, further comprising:

based on detecting a change in the density of the section of the venue, determining an updated density for the section of the venue: “participation information may represent actual participation, collected in real- or almost real-time from information provided via the applications executing on the various mobile devices” (¶ [0080]), where participation is relevant to a location or section, and more participation indicates a higher density;

selecting a different visual effect based on the updated density: “… the authoring system 120 may move or position various effects based in indicated participation rate at various locations in the venue” (¶ [0081]), where the change in density is the participation rate, which may change as an event goes on, and the authoring system will alter effects accordingly; and

outputting, on devices specified as having the active state in association with the updated density in the section of the venue, signals that are coordinated to form the different visual effect: “Based on the user input, the venue layout, and optionally on participation information, the authoring system 120 generates sets of instructions for each of a plurality of locations in the venue. The instructions specify when and how to actuate a transducer of a mobile device located at the respective location to produce the defined visual or aural effect or show. The locations may be individual seats or may be a group or set of seats” (¶ [0082]), where the participation information includes devices in an active state based on the plurality of locations at a venue, the updated density is the real-time participation information, and the different visual effect is the defined visual or aural effect.
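Snyder's ¶ [0081] makes the density idea of claims 12-13 concrete: dense floor sections get a 1:1 seat-to-pixel mapping, sparse upper decks 1:8. A small sketch follows; the density cutoffs and the intermediate 2×2 tier are assumptions for the example, not values from the art.

```python
# Illustrative density-driven mapping (claims 12-13, Snyder ¶ [0081]).
def seats_per_pixel(active_devices: int, seats_in_section: int) -> int:
    """Choose how many seats share one effective pixel in a section."""
    density = active_devices / seats_in_section if seats_in_section else 0.0
    if density >= 0.75:
        return 1   # dense (e.g., the floor): one seat per pixel
    if density >= 0.25:
        return 4   # moderate: group a 2x2 block of seats per pixel
    return 8       # sparse (e.g., upper deck): 1:8 mapping
```

Re-running this as participation changes gives the claim 13 behavior: an updated density yields a different mapping and hence a different effect.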
Regarding claim 16, Snyder teaches [t]he method of claim 1, further comprising:

determining a characteristic corresponding to a user of the respective device: “the respective generated sets of instructions may be downloaded to a mobile device when the mobile device is used to purchase tickets to the event and the seating is determined.” (¶ [0067]), where the location is determined specifically to the user of the device; and

modifying the signals outputted by the respective device based on the determined characteristic: “Instructions may also be downloaded to the mobile device when the event is known and before a seat or the seating is allocated by downloading sets of instructions for the seating throughout the venue for the event. This allows automatic correlation of the mobile device with a respective venue location and the respective output sequence logically associated with that respective venue location of the attendee or mobile device.” (¶ [0067]), where the output can be downloaded based on location within the venue, for example, the home colors for the home section and vice versa.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 14, 15, & 18 are rejected under 35 U.S.C. 103 as being unpatentable over Snyder, and further in view of Maik Andre Lindner (U.S. Pat. Pub. App. No. US-20180063803-A1, hereinafter “Lindner”).

Regarding claim 14, Snyder teaches [t]he method of claim 13, further comprising: Snyder does not explicitly teach based on determining that the respective device has changed from the active state to the inactive state, transmitting a notification to the respective device prompting the at least one device to participate in output of the signals that are coordinated to form the visual effect.

Lindner teaches based on determining that the respective device has changed from the active state to the inactive state, transmitting a notification to the respective device prompting the at least one device to participate in output of the signals that are coordinated to form the visual effect: “If the user closes their mobile device while on this screen the application will notify the user when the Event 101 is within seconds of starting so they can open their mobile device and position the device for experience” (Lindner, ¶ [0038]), where the device being turned off is considered inactive.

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of involving a group of devices to create a group display event by Snyder with the method of system notifications taught by Lindner to ensure participation. The suggestion/motivation to do so would have been to remind users to participate and engage. More participation would improve the user experience, and allow for advertising or monetization (Lindner, ¶ [0041]).
Regarding claim 15, Snyder teaches [t]he method of claim 1, further comprising: Snyder does not explicitly teach outputting, at the respective device, an indication directing a user to position or move the respective device in a particular manner to form the visual effect.

Lindner teaches outputting, at the respective device, an indication directing a user to position or move the respective device in a particular manner to form the visual effect: “Additional information can also be requested from the user on this page e.g., Location 102, Scene 103, Pattern 110, or Audio 111” (Lindner, ¶ [0037]), where the user can request information such as event patterns or locations to form the visual effect.

[Image: media_image2.png (Lindner, Figure 4)] The image (figure 4e, 107) shows instructions outputted to the user via the display screen, alongside advertisements. These instructions may include a location, scene, pattern, or audio (Lindner, ¶ [0037]).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of involving a group of devices to create a group display event by Snyder with the method of instructing a user to move the device taught by Lindner to create a more engaging display. The suggestion/motivation to do so would have been to improve the visual or draw more participants in by encouraging others to become involved in the group display (Lindner, ¶ [0041]).

Regarding claim 18, Snyder teaches [t]he method of claim 1, further comprising: Snyder does not explicitly teach identifying device capabilities corresponding to the respective device; and selecting the signals to output on the respective device based on the device capabilities corresponding to the respective device.

Lindner teaches identifying device capabilities corresponding to the respective device: “Several other functional elements are connected to the Processor 202 by the Bus 203 that can help establish interaction with the group experience in real-time. These include but are not limited to the Accelerometer 206, Microphone 207, Cell Transceiver 208, WiFi Transceiver 209, GPS Receiver 210, Bluetooth Transceiver 211, Nearfield Transceiver 212, Clock/Timer 213. Additional elements include the Display 214, Touch Screen 215, Flash 216, Speaker(s) 217, Thermometer 218, Vibrator 219, Auxiliary I/O 220, and Camera 221” (Lindner, ¶ [0039]), where all the functional elements may or may not be used for the event, but additional elements can be included; detecting these attributes allows for establishing capabilities: “they help establish or modify the Event 101 attributes based on detecting information that can be interpreted for attribute values” (Lindner, ¶ [0039]); and selecting the signals to output on the respective device based on the device capabilities corresponding to the respective device: “GPS Satellites 252, Cell Tower Transceivers 253, WiFi Transceivers 254, Proximity Beacons 255, Voice Recognition 256, Noise Levels 257, Information from other Mobile Devices 258, Audio Speakers 259, Photographic Flash 260, and Touch Screen Input 261. This complex network of stimulation sources can be used to gather timing and other Event 101 attributes” (Lindner, ¶ [0040]), where the available network and stimulation sources can be selected to enrich group event participation.

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of involving a group of devices to create a group display event by Snyder with the method of identifying device capabilities taught by Lindner to assess what requirements of the group display the device is capable of. The suggestion/motivation to do so would have been to use outputs the device is capable of to improve the group display visuals, and include more methods of output based on availability (Lindner, ¶ [0040]).
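Claim 18's capability-driven selection, read onto Lindner's hardware inventory (¶¶ [0039]-[0040]), reduces to filtering a preferred output list by what the device reports. A minimal sketch with illustrative capability names (not an API from either reference):

```python
# Hedged sketch of capability-based output selection (claim 18, Lindner
# ¶¶ [0039]-[0040]). Capability names are invented for the example.
def select_outputs(capabilities: set) -> list:
    """Keep only the output channels this device actually has."""
    preferred = ["screen", "flash", "speaker", "vibrator"]
    return [channel for channel in preferred if channel in capabilities]

# e.g., select_outputs({"screen", "speaker"}) -> ["screen", "speaker"]
```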
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Snyder in view of Andrew Tatrai (U.S. Pat. Pub. App. No. WO-2021011992-A1, hereinafter “Tatrai”).

Regarding claim 17, Snyder teaches [t]he method of claim 1, wherein: the plurality of devices are associated with a plurality of spectators at the venue: “… a plurality of sets of instructions executable by each of a plurality of mobile devices possessed by respective ones of a plurality of attendees of at least one event at the selected one of the venues…” (Snyder, ¶ [0006]); and determining the initial position of the respective device further comprises: “The at least one circuit may include at least one processor unit, and may be communicatively coupled to receive location specification information include geolocation coordinates derived by a geolocation system.” (Snyder, ¶ [0029]).

Snyder does not explicitly teach one or more cameras in the venue are configured to capture a plurality of images of the plurality of spectators; causing the respective device to output an initial output, wherein the initial output comprises a pattern unique to the respective device; identifying portions of the plurality of images which depict the initial output; and determining the initial position of the respective device with respect to the plurality of devices based at least in part on the portions of the plurality of images which depict the initial output.

Tatrai teaches one or more cameras in the venue are configured to capture a plurality of images of the plurality of spectators: “The system 102 includes a data collection module 108 includes a number of data capturing devices 110A-110N installed in a number of zones 104A-104N, respectively.” (Tatrai, ¶ [0043]); additionally, “The data capturing devices 214A-214N may include camera, CCTV, web cameras, video cameras, and so forth that are installed in locations where more people or crowd is expected …” (Tatrai, ¶ [0060]);

causing the respective device to output an initial output, wherein the initial output comprises a pattern unique to the respective device: “the data capturing devices 110A-110N may collect input data (i.e. the crowd data or data about an individual) using different remote (and/or non-contact) sensing technologies.” (Tatrai, ¶ [0046]), where the input data can be about an individual, and includes remote sensing technologies, which are not exclusive of pattern recognition;

identifying portions of the plurality of images which depict the initial output: “The system 102 may use IoT models for the data capturing devices 110A-110N provisioning and data aggregation.” (Tatrai, ¶ [0045]), where the data capturing devices would be capable of identifying a unique output, or the initial output; and

determining the initial position of the respective device with respect to the plurality of devices based at least in part on the portions of the plurality of images which depict the initial output: “The CNNs are mathematical models that may allow the capturing and learning of complex features in images (i.e. the captured crowd data) to understand scenes, and extract useful information in an autonomous manner. The CNN may be used for object detection to recognize patterns such as edges (vertical/horizontal), shapes, colours, and textures in the crowd data. Further, the CNN may filter the crowd data, and transform the crowd data by using a specific pattern/feature.” (Tatrai, ¶ [0042]), where the use of convolutional neural networks allows for the detection of patterns. The initial output pattern and its location would then be detected by the CNN, and its position can be determined. The system may also output crowd data, such as location (Tatrai, ¶ [0056]).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods of considering crowd location to display a group image taught by Snyder with the teachings of Tatrai to detect the location of an individual within a crowd, based on a specific output pattern. The suggestion/motivation to do so would have been to monitor the individual's location: “The system can also be used to monitor and manage a flow and mood of the crowd without actual facial recognition or any personal characteristics recognition of individuals in the crowd.” (Tatrai, ¶ [00128]), as monitoring the flow or movement of an individual is the purpose of observation.
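The claim 17 combination can be pictured as an optical handshake: each device emits a unique blink pattern, venue cameras record per-region brightness over time, and decoding the pattern ties a device ID to an image region. The binary encoding below is an assumption for illustration; neither Snyder nor Tatrai specifies one.

```python
# Illustrative pattern-based localization (claim 17). The binary blink
# encoding and the thresholded per-cell sequences are assumptions.
def blink_pattern(device_id: int, bits: int = 8):
    """Unique on/off sequence a device emits as its 'initial output'."""
    return [(device_id >> i) & 1 for i in range(bits)]

def decode_positions(on_off_by_cell: dict, bits: int = 8) -> dict:
    """on_off_by_cell maps an image cell (row, col) to the thresholded
    on/off sequence observed there; returns {device_id: cell}. Rejection
    of cells with no device is omitted here for brevity."""
    positions = {}
    for cell, sequence in on_off_by_cell.items():
        device_id = sum(bit << i for i, bit in enumerate(sequence[:bits]))
        positions[device_id] = cell
    return positions
```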
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAIDEN ALEXANDER USSERY, whose telephone number is (571) 272-1192. The examiner can normally be reached Monday - Friday, 7:30AM - 5PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tammy Goddard, can be reached at (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C.A.U./ Examiner, Art Unit 2611
/TAMMY PAIGE GODDARD/ Supervisory Patent Examiner, Art Unit 2611

Prosecution Timeline

May 31, 2024: Application Filed
Jan 15, 2026: Non-Final Rejection — §102, §103 (current)

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
