Prosecution Insights
Last updated: April 19, 2026
Application No. 18/635,267

SYSTEM AND METHOD FOR A GUEST TO INTERACT WITH AN INTERACTIVE AREA

Non-Final OA (§101, §103)
Filed: Apr 15, 2024
Examiner: VU, THANH T
Art Unit: 2179
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Universal City Studios LLC
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 6m
Grant Probability With Interview: 91%

Examiner Intelligence

Career Allow Rate: 74% (464 granted / 623 resolved; +19.5% vs TC avg, above average)
Interview Lift: +16.5% on resolved cases with an interview
Typical Timeline: 3y 6m average prosecution; 19 applications currently pending
Career History: 642 total applications across all art units
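The headline figures above can be reproduced directly from the raw counts. The sketch below is illustrative only (the helper function is hypothetical, not part of any analytics product); it assumes the 464/623 resolved-case split and the reported +19.5 point delta and +16.5 point interview lift.

```python
# Reproduce the examiner-level stats shown above from raw counts.
# Counts and deltas are taken from the page; the helper is illustrative.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage."""
    return 100.0 * granted / resolved

career = allow_rate(464, 623)          # 464 granted of 623 resolved
tc_avg = career - 19.5                 # page reports +19.5 pts vs TC average
with_interview = career + 16.5         # page reports a +16.5 pt interview lift

print(f"Career allow rate:  {career:.1f}%")   # ~74.5%, shown as 74%
print(f"TC average (est.):  {tc_avg:.1f}%")
print(f"With interview:     {with_interview:.0f}%")
```

Note that the 91% "with interview" figure shown above is simply the career rate plus the interview lift, rounded.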

Statute-Specific Performance

§101: 7.2% (-32.8% vs TC avg)
§103: 47.1% (+7.1% vs TC avg)
§102: 17.6% (-22.4% vs TC avg)
§112: 16.1% (-23.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 623 resolved cases.
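The per-statute deltas are simple differences against a Tech Center baseline. A minimal sketch, with rates hard-coded from the table above; the 40% baseline is an assumption implied by the fact that every reported delta is consistent with that single value:

```python
# Per-statute rates vs. the Tech Center average.
# Rates are copied from the table above; the 40.0 baseline is inferred
# from the reported deltas (e.g., 47.1 - 7.1 = 40.0, 7.2 + 32.8 = 40.0).

rates = {"101": 7.2, "103": 47.1, "102": 17.6, "112": 16.1}
TC_AVG = 40.0  # assumed baseline implied by the reported deltas

for statute, rate in rates.items():
    delta = rate - TC_AVG
    print(f"\u00a7{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```

Run as-is, this regenerates the four delta figures shown in the table.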

Office Action

DETAILED ACTION

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1, 15 and 20 recite an interactive area comprising: a display…; a recognition and tracking system…; and a controller; a method for a guest to interact with an interactive area, the method comprising…; and a system for guests to interact with an interactive area, the system comprising…, which fall within a statutory category under Step 1. The claims recite the limitations of a queue for an attraction; a recognition and tracking system configured to recognize or to identify a guest upon entering the queue and to track the guest as they move along the path; and to exhibit/display signage that moves (approximately) with the guest as the guest moves along the path, as in claims 1, 15, and 20. These limitations, under their broadest reasonable interpretation, recite the abstract idea of certain methods of organizing human activity, in particular managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions (i.e., managing a queue of guests by tracking and identifying the guests and holding a sign with certain information to inform the guests).
If a claim, under its broadest reasonable interpretation, covers a limitation that is drawn to certain methods of organizing human activity but for the recitation of a generic computer and/or generic computer components described at a high level of generality, or merely links the use of the judicial exception to a particular technological environment or field of use, then it falls within the grouping of abstract ideas. Thus, these limitations recite and fall within the "Certain Methods of Organizing Human Activity" grouping of abstract ideas under Step 2A Prong I.

The claims further recite the additional limitations of "an interactive area", "a first display", "a display", "a controller coupled to the recognition and tracking system, wherein the controller is configured to cause the first display to exhibit", "displaying, in response to control signals from a controller coupled to the recognition and tracking system, a user interface on a display" and "one or more devices that enable", as in claims 1, 15 and 20. However, these additional limitations are recited at a high level of generality such that they amount to mere instructions to implement the abstract idea on a generic computer, to adding the words "apply it" (or an equivalent) to the judicial exception, and/or to using a generic computer or generic computer components as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, these additional elements do not integrate the recited judicial exception into a practical application, and the claims are therefore directed to the judicial exception under Step 2A Prong II.
These claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements ("an interactive area", "a first display", "a display", "a controller coupled to the recognition and tracking system, wherein the controller is configured to cause the first display to exhibit", "displaying, in response to control signals from a controller coupled to the recognition and tracking system, a user interface on a display" and "one or more devices that enable", as in claims 1, 15 and 20) are recited at a high level of generality such that they amount to mere instructions to implement the abstract idea on a generic computer, to adding the words "apply it" (or an equivalent) to the judicial exception, and/or to using a generic computer or generic computer components as a tool to perform the abstract idea. See MPEP 2106.05(f). The additional limitations are at best mere instructions to implement an abstract idea on a computer, which is insignificant and not indicative of integration into a practical application. See MPEP 2106.05(f). Mere instructions to apply the exception cannot provide an inventive concept. Accordingly, the claims do not appear to be patent eligible under 35 USC 101 under Step 2B.
Per claims 2-14 and 16-19, these claims are likewise rejected for being directed to an abstract idea of "Certain Methods of Organizing Human Activity" without significantly more, as detailed in the rejection of independent claims 1, 15 and 20 above. In particular, the additional elements in these claims are at best the equivalent of displaying and exhibiting on a display via a processor/controller, which amounts to mere instructions to implement an abstract idea on a computer and is insignificant and not indicative of integration into a practical application. See MPEP 2106.05(f).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-4, 6-12, 14-18, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Karafin et al. ("Karafin", Pub. No. US 2020/0384371) and Holguin et al. ("Holguin", Pub. No. US 2023/0305661). Per claim 1, Karafin teaches an interactive area comprising: a first display disposed along a path for a queue for an attraction (fig.
7; [0183]; which show a display 720 is disposed along a path for a queue of viewers 705, 710 and 715); a recognition and tracking system configured to recognize or to identify a guest upon entering the queue and to track the guest as they move along the path ([0185]… the tracking system is configured to track responses of a viewer of the one or more viewers to the presented holographic content within the viewing volume and the controller of the LF display system 700 is configured to update the presented holographic content based on the tracked response (e.g., a position of the viewer, a movement of the viewer, a gesture of the viewer, an expression of the viewer, a gaze of the viewer, and auditory feedback of the viewer, some other tracked response, or some combination thereof). The tracking system tracks movement of viewers in the viewing volume of the LF display system 700. The tracking system may make use of viewers' body movements for rendering new holographic content. [0186]…the viewer profiling module is configured to identify a viewer of the one or more viewers within the viewing volume and generate a viewer profile for the viewer. The controller of the LF display system 700 generates the holographic content for the viewer based in part on the viewer profile. In other embodiments, the controller uses the viewer profile and an AI model to generate the holographic content. The viewer profiling module may include sensors for identifying the viewers as they wait in the queue. These sensors may include facial recognition scanners or card identification scanners. 
[0187]-[0189]…Upon accessing the viewer profile, the LF display system 700 presents holographic content that includes personalized holographic content, amusement park ride suggestions, an amusement park ride wait time, or some combination thereof…The LF display system 700 may utilize the viewer profiling module to personalize holographic content to the viewer during each subsequent visit to the amusement park ride. For example, the LF display system 700 addresses the viewer by name (e.g., visually or audio-wise)); and a controller coupled to the recognition and tracking system, wherein the controller is configured to cause the first display to exhibit a user interface as the guest moves along the path ([0185]… the tracking system is configured to track responses of a viewer of the one or more viewers to the presented holographic content within the viewing volume and the controller of the LF display system 700 is configured to update the presented holographic content based on the tracked response (e.g., a position of the viewer, a movement of the viewer, a gesture of the viewer, an expression of the viewer, a gaze of the viewer, and auditory feedback of the viewer, some other tracked response, or some combination thereof). The tracking system tracks movement of viewers in the viewing volume of the LF display system 700. The tracking system may make use of viewers' body movements for rendering new holographic content. [0186]…the viewer profiling module is configured to identify a viewer of the one or more viewers within the viewing volume and generate a viewer profile for the viewer. The controller of the LF display system 700 generates the holographic content for the viewer based in part on the viewer profile. In other embodiments, the controller uses the viewer profile and an AI model to generate the holographic content. The viewer profiling module may include sensors for identifying the viewers as they wait in the queue. 
These sensors may include facial recognition scanners or card identification scanners. [0187]-[0189]…Upon accessing the viewer profile, the LF display system 700 presents holographic content that includes personalized holographic content, amusement park ride suggestions, an amusement park ride wait time, or some combination thereof…The LF display system 700 may utilize the viewer profiling module to personalize holographic content to the viewer during each subsequent visit to the amusement park ride. For example, the LF display system 700 addresses the viewer by name (e.g., visually or audio-wise)). Although Karafin teaches providing the viewers with personalized content as the viewers move along the path, Karafin does not specifically teach causing the first display to exhibit a user interface that moves with the guest along the first display. However, Holguin teaches causing the first display to exhibit a user interface that moves with the guest along the first display ([0026]… certain embodiments provide improvements in graphics processing by automatically applying various rules of a particular type, such as user interface positioning constraints, to control the manner in which computing devices dynamically generate user interfaces for presentation on an interactive display device (e.g., and/or dynamically modify a current position, size, or configuration of one or more user interfaces or interface elements). [0063]…the system is configured to receive input data in each of a plurality of defined zones (e.g., as shown in FIG. 11) on the interactive display device and track a number of inputs provided in each zone. The system may, for example, define one or more zones 422, 424, 426, 428 and track user inputs within each zone.
The system may then determine that a user has worked exclusively and/or primarily in a particular zone for at least a particular length of time, and then translate (e.g., modify a lateral position of) one or more interface features or elements toward that zone in response. [0074]… the system is configured to substantially continuously (e.g., continuously) modify the user interface elements as the user moves back and forth in front of the interactive display device. In still other embodiments, the system is configured to modify the lateral position of the user interface elements after a particular length of time following movement of the user (e.g., by biasing the one or more user interface elements toward the lateral position of the user after a period of time, gradually over a period of time, etc.)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the teaching of Holguin in the invention of Karafin to provide dynamic generation of a user interface as the user moves along the interactive display, because doing so would improve user interface generation and improve the efficiency of a user using/interacting with the interactive display. Per claim 2, the modified Karafin teaches the interactive area of claim 1, wherein the controller is configured to cause the user interface to move proximate to the guest, for a period of time along the first display as the guest moves along the path (Holguin, [0063]…The system may then determine that a user has worked exclusively and/or primarily in a particular zone for at least a particular length of time, and then translate (e.g., modify a lateral position of) one or more interface features or elements toward that zone in response).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the teaching of Holguin in the invention of the modified Karafin to provide dynamic generation of a user interface as the user moves along the interactive display, because doing so would improve user interface generation and improve the efficiency of a user using/interacting with the interactive display. Per claim 3, the modified Karafin teaches the interactive area of claim 1, wherein the controller is configured to cause the user interface to intermittently appear along the first display proximate to the guest as the guest moves along the path (Holguin, [0063]…the system is configured to receive input data in each of a plurality of defined zones (e.g., as shown in FIG. 11) on the interactive display device and track a number of inputs provided in each zone. The system may, for example, define one or more zones 422, 424, 426, 428 and track user inputs within each zone. The system may then determine that a user has worked exclusively and/or primarily in a particular zone for at least a particular length of time, and then translate (e.g., modify a lateral position of) one or more interface features or elements toward that zone in response. [0074]… the system is configured to substantially continuously (e.g., continuously) modify the user interface elements as the user moves back and forth in front of the interactive display device. In still other embodiments, the system is configured to modify the lateral position of the user interface elements after a particular length of time following movement of the user (e.g., by biasing the one or more user interface elements toward the lateral position of the user after a period of time, gradually over a period of time, etc.) It is noted that the user can jump from one zone to another (e.g. move from one zone 422 to another zone 426) of fig.
11, thus providing intermittent display of user interface 415 along the display screen instead of continuously moving from one zone to another as shown in figs. 6-8). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the teaching of Holguin in the invention of the modified Karafin to provide dynamic generation of a user interface as the user moves along the interactive display, because doing so would improve user interface generation and improve the efficiency of a user using/interacting with the interactive display. Per claim 4, the modified Karafin teaches the interactive area of claim 3, wherein the controller is configured to cause the user interface to appear proximate to the guest along the first display only when the guest is stationary along the path (Holguin, [0063]… The system may then determine that a user has worked exclusively and/or primarily in a particular zone for at least a particular length of time (i.e. stationary in a zone for a particular length of time), and then translate (e.g., modify a lateral position of) one or more interface features or elements toward that zone in response.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the teaching of Holguin in the invention of the modified Karafin to provide dynamic generation of a user interface as the user moves along the interactive display, because doing so would improve user interface generation and improve the efficiency of a user using/interacting with the interactive display.
Per claim 6, the modified Karafin teaches the interactive area of claim 1, wherein the controller is coupled to the first display, and wherein the first display comprises a touchscreen configured to enable the guest to directly interact with the user interface (Karafin, [0047]… one or more acoustic projection devices may be used to (1) generate a tactile surface that is collocated with a surface of the dolphin such that viewers may touch the holographic object. [0165]… The sensory feedback system 570 may provide tactile feedback by providing a tactile surface coincident with a surface of the holographic character that the one or more viewers may interact with via touch. Holguin, [0037]… the one or more interactive display devices 110 comprise one or more resistive touch screen displays (e.g., one or more 5-wire resistive touch screen displays), one or more surface capacitive touch screen displays, one or more projected capacitive touch screen displays, one or more surface acoustic wave touch screen displays, one or more infrared touch screen displays). Per claim 7, the modified Karafin teaches the interactive area of claim 1, further comprising a projection system configured to project the user interface on the first display, wherein the controller is coupled to the projection system and is configured to cause display of the user interface on the first display via signals sent to the projection system (Karafin, [0082]…the array 410 may project one or more holographic objects. For example, in the illustrated embodiment, the array 410 projects a holographic object 420 and a holographic object 422. [0089]… the LF display system 400 may include one or more acoustic projection devices integrated into the array 410 as described herein. The acoustic projection devices may consist of an array of ultrasonic sources configured to project a volumetric tactile surface. 
In some embodiments, the tactile surface may be coincident with a holographic object (e.g., at a surface of the holographic object 420) for one or more surfaces of a holographic object if a portion of a viewer gets within a threshold distance of the one or more surfaces. The volumetric tactile sensation may allow the user to touch and feel surfaces of the holographic object). Per claim 8, the modified Karafin, teaches the interactive area of claim 7, wherein the projection system comprises both a plurality of projectors and one or more reflective surface disposed behind the first display along the path, wherein the plurality of projectors are configured to cause the user interface to be projected on the one or more reflective surface, and the one or more reflective surface is configured to reflect the user interface on the first display (Karafin, [0005]… the LF display system includes LF display modules that form a surface (e.g., wall, ceiling, floor, control panel, etc.) on the amusement park ride. In some embodiments, the LF display system includes LF display modules placed on either or both sides of an amusement park queue. [0082]…the array 410 may project one or more holographic objects. For example, in the illustrated embodiment, the array 410 projects a holographic object 420 and a holographic object 422. [0089]… the LF display system 400 may include one or more acoustic projection devices integrated into the array 410 as described herein. The acoustic projection devices may consist of an array of ultrasonic sources configured to project a volumetric tactile surface. In some embodiments, the tactile surface may be coincident with a holographic object (e.g., at a surface of the holographic object 420) for one or more surfaces of a holographic object if a portion of a viewer gets within a threshold distance of the one or more surfaces. The volumetric tactile sensation may allow the user to touch and feel surfaces of the holographic object). 
Per claim 9, the modified Karafin teaches the interactive area of claim 1, wherein the recognition and tracking system comprises a plurality of directional microphones disposed along the path (Karafin, [0047]…An acoustic receiving device (e.g., a microphone or microphone array) may be configured to monitor ultrasonic and/or audible pressure waves within a local area of the LF display module 210. [0115]… In some embodiments, the sensory feedback system 570 is configured to receive input from viewers of the LF display system 500. In this case, the sensory feedback system 570 includes various sensory feedback devices for receiving input from viewers. The sensor feedback devices may include devices such as acoustic receiving devices (e.g., a microphone). [0116]… To illustrate, in an example embodiment of a light field display assembly, a sensory feedback system 570 includes a microphone. The microphone is configured to record audio produced by one or more viewers (e.g., gasps, screams, laughter, etc.).). Per claim 10, the modified Karafin teaches the interactive area of claim 9, wherein the plurality of directional microphones are configured to enable the guest to interact audibly with the user interface as the guest moves along the path (Karafin, [0047]…An acoustic receiving device (e.g., a microphone or microphone array) may be configured to monitor ultrasonic and/or audible pressure waves within a local area of the LF display module 210. [0115]… In some embodiments, the sensory feedback system 570 is configured to receive input from viewers of the LF display system 500. In this case, the sensory feedback system 570 includes various sensory feedback devices for receiving input from viewers. The sensor feedback devices may include devices such as acoustic receiving devices (e.g., a microphone). [0116]… To illustrate, in an example embodiment of a light field display assembly, a sensory feedback system 570 includes a microphone. 
The microphone is configured to record audio produced by one or more viewers (e.g., gasps, screams, laughter, etc.).). Per claim 11, the modified Karafin teaches the interactive area of claim 9, comprising an input device configured to be handled by the guest to provide input to the user interface as the guest moves along the path (Karafin, [0138]…The viewer profiling module may also utilize card identification scanners, voice identifiers, a radio-frequency identification (RFID) chip scanners, barcode scanners, etc. to positively identify a viewer. In one example, viewers may be given a barcode on a wristband. Paired with a barcode scanner, the viewer profiling module may positively identify the viewer waiting in the queue. In another example, viewers may be given a RFID chip that can then be scanned with a RFID scanner to positively identify the viewer waiting in the queue.) Per claim 12, the modified Karafin teaches the interactive area of claim 1, wherein the recognition and tracking system comprises a plurality of radio frequency identification (RFID) readers disposed along the path and configured to recognize or to identify the guest and to track the guest via an RFID tag in a device associated with the guest (Karafin, [0138]…The viewer profiling module may also utilize card identification scanners, voice identifiers, a radio-frequency identification (RFID) chip scanners, barcode scanners, etc. to positively identify a viewer. In one example, viewers may be given a barcode on a wristband. Paired with a barcode scanner, the viewer profiling module may positively identify the viewer waiting in the queue. In another example, viewers may be given a RFID chip that can then be scanned with a RFID scanner to positively identify the viewer waiting in the queue.) 
Per claim 14, the modified Karafin teaches the interactive area of claim 1, wherein the recognition and tracking system is configured to recognize or to identify a plurality of guests upon entering the queue and to track each guest as they move along the path, and wherein the controller is configured to display a user interface on the first display for each guest, and wherein each user interface moves with its respective guest along the first display as each guest moves along the path (Karafin, [0138]…The viewer profiling module may also utilize card identification scanners, voice identifiers, a radio-frequency identification (RFID) chip scanners, barcode scanners, etc. to positively identify a viewer. In one example, viewers may be given a barcode on a wristband. Paired with a barcode scanner, the viewer profiling module may positively identify the viewer waiting in the queue. In another example, viewers may be given a RFID chip that can then be scanned with a RFID scanner to positively identify the viewer waiting in the queue. [0186]…the viewer profiling module is configured to identify a viewer of the one or more viewers within the viewing volume and generate a viewer profile for the viewer. The controller of the LF display system 700 generates the holographic content for the viewer based in part on the viewer profile. In other embodiments, the controller uses the viewer profile and an AI model to generate the holographic content. The viewer profiling module may include sensors for identifying the viewers as they wait in the queue. These sensors may include facial recognition scanners or card identification scanners. 
[0187]-[0189]…Upon accessing the viewer profile, the LF display system 700 presents holographic content that includes personalized holographic content, amusement park ride suggestions, an amusement park ride wait time, or some combination thereof…The LF display system 700 may utilize the viewer profiling module to personalize holographic content to the viewer during each subsequent visit to the amusement park ride. For example, the LF display system 700 addresses the viewer by name (e.g., visually or audio-wise); Holguin, [0026]… certain embodiments provide improvements in graphics processing by automatically applying various rules of a particular type, such as user interface positioning constraints, to control the manner in which computing devices dynamically generate user interfaces for presentation on an interactive display device (e.g., and/or dynamically modify a current position, size, or configuration of one or more user interfaces or interface elements. [0063]…the system is configured to receive input data in each of a plurality of defined zones (e.g., as shown in FIG. 11) on the interactive display device and track a number of inputs provided in each zone. The system may, for example, define one or more zones 422, 424, 426, 428 and track user inputs within each zone. The system may then determine that a user has worked exclusively and/or primarily in a particular zone for at least a particular length of time, and then translate (e.g., modify a lateral position of) one or more interface features or elements toward that zone in response. [0074]… the system is configured to substantially continuously (e.g., continuously) modify the user interface elements as the user moves back and forth in front of the interactive display device. 
In still other embodiments, the system is configured to modify the lateral position of the user interface elements after a particular length of time following movement of the user (e.g., by biasing the one or more user interface elements toward the lateral position of the user after a period of time, gradually over a period of time, etc.)). Claims 15-18 are rejected under the same rationale as claims 1-3 and 6, respectively. Claim 20 is rejected under the same rationale as claim 1. Claim(s) 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Karafin et al. ("Karafin", Pub. No. US 2020/0384371), Holguin et al. ("Holguin", Pub. No. US 2023/0305661), and Bruno et al. ("Bruno", Pub. No. US 2020/0223360). Per claim 5, the modified Karafin teaches the interactive area of claim 1, in which the viewer profiling module may include sensors for identifying the viewers as they wait in the queue, and which provides personalized holographic content/user interface during a transition of the guest along the path, as described above by Karafin and Holguin. The modified Karafin does not teach wherein a second display is disposed along the path for the queue for the attraction, the second display is both separate from and spaced apart from the first display, and the controller is configured to cause the second display to exhibit the user interface previously displayed on the first display during a transition of the guest along the path from a position proximate to the first display to a position proximate to the second display.
However, Bruno teaches the second display is both separate from and spaced apart from the first display, and the controller is configured to cause the second display to exhibit the user interface ([0021]… While the illustrated embodiment includes four transparent displays 32, other embodiments may include more or fewer transparent displays 32 (e.g., one transparent display 32, two transparent displays 32, five transparent displays 32, ten transparent displays 32, or any other suitable number of transparent displays 32). [0031]… the transparent displays 32 are generally parallel to the ride path 16.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the teaching of Bruno in the invention of the modified Karafin to provide a plurality of displays along a path, because doing so would enhance the user's experience by presenting personalized content/user interface to the viewers in the queue. Claim(s) 13 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Karafin et al. ("Karafin", Pub. No. US 2020/0384371), Holguin et al. ("Holguin", Pub. No. US 2023/0305661), and Graefe et al. ("Graefe", Pub. No. US 2021/0110706). Per claim 13, the modified Karafin teaches the interactive area of claim 1, comprising a walkway disposed along the path (Karafin, [0179]…which shows a walkway for viewers to move along), but does not teach a moving walkway. However, Graefe clearly teaches a moving walkway ([0010]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the teaching of Graefe in the invention of the modified Karafin to provide a moving walkway for guests along a path, because doing so would enhance the user's experience using the transportation resource. Claim 19 is rejected under the same rationale as claim 13.
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Mason (Pat. No. US 9,471,192) discloses region dynamics for a large digital whiteboard.

Inquiries

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THANH T VU, whose telephone number is (571) 272-4073. The examiner can normally be reached M-F, 7 AM - 3:30 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Fred Ehichioya, can be reached at (571) 272-4034. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/THANH T VU/
Primary Examiner, Art Unit 2179

Prosecution Timeline

Apr 15, 2024
Application Filed
Feb 12, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602152
SYSTEMS AND METHODS TO PROVIDE PERSONALIZED GRAPHICAL USER INTERFACES WITHIN A COLLABORATION ENVIRONMENT
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12591352
INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12579358
SUPPLEMENTAL CONTENT AND GENERATIVE LANGUAGE MODELS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572262
COMMUNICATION APPARATUS, IMAGE GENERATION SYSTEM, CONTROL METHOD OF COMMUNICATION APPARATUS, CONTROL METHOD OF IMAGE GENERATION SYSTEM, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12572324
SYSTEMS AND METHODS FOR DISPLAYING SUBJECTS OF AN AUDIO PORTION OF CONTENT AND SEARCHING FOR CONTENT RELATED TO A SUBJECT OF THE AUDIO PORTION
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
74%
Grant Probability
91%
With Interview (+16.5%)
3y 6m
Median Time to Grant
Low
PTA Risk
Based on 623 resolved cases by this examiner. Grant probability derived from career allow rate.
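The headline figures above follow directly from the counts shown on this page; a minimal sketch of that arithmetic (the exact rounding the dashboard applies is an assumption):

```python
# Derive the dashboard's headline percentages from its raw counts.
# Rounding behavior is an assumption; the dashboard may truncate instead.

granted = 464    # career grants by this examiner ("464 granted / 623 resolved")
resolved = 623   # career resolved cases

career_allow_rate = granted / resolved * 100
print(round(career_allow_rate))  # 74  -> "74% Grant Probability"

interview_lift = 16.5            # percentage-point lift for cases with an interview
print(round(career_allow_rate + interview_lift))  # 91  -> "91% With Interview"
```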
