Prosecution Insights
Last updated: April 19, 2026
Application No. 18/794,936

APPARATUS FOR IMPLEMENTING IN-PERSON TEACHING AND ONLINE TEACHING

Non-Final OA (§102, §103)
Filed: Aug 05, 2024
Examiner: ANWAH, OLISA
Art Unit: 2692
Tech Center: 2600 — Communications
Assignee: Hybriu Inc.
OA Round: 1 (Non-Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 1m
With Interview: 93%

Examiner Intelligence

Grants 89% — above average
Career Allow Rate: 89% (1036 granted / 1162 resolved), +27.2% vs TC avg
Interview Lift: +4.2% (minimal), based on resolved cases with interview
Avg Prosecution: 2y 1m (fast prosecutor)
Career history: 1200 total applications across all art units; 1162 resolved, 38 currently pending

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§102: 29.1% (-10.9% vs TC avg)
§103: 42.0% (+2.0% vs TC avg)
§112: 5.0% (-35.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 1162 resolved cases

Office Action

§102 §103
DETAILED ACTION

1. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Objections

2. Claim 1 is objected to because --The multimedia-- should be changed to “the multimedia”, --The WIFI-- should be changed to “the Wifi” and --The processing-- should be changed to “the processing”. Appropriate correction is required.

Claim Rejections - 35 USC § 102

3. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

4. Claims 1-8 are rejected under 35 U.S.C. § 102(a)(2) as being anticipated by Segal et al., U.S. Patent Application Publication No. 2016/0373693 (hereinafter Segal).
Regarding claim 1, Segal discloses an apparatus (from paragraph 0026, see Internet media extender provided by APPLE TV, ROKU, AMAZON FIRE TV or GOOGLE CHROMECAST) for implementing in-person teaching and online teaching, characterized in that the apparatus is integrally provided with a multimedia module (from paragraph 0029, see various forms of multi-media content), a processing control module, and a WIFI module and an antenna interface (from paragraph 0046, see After an Internet media extender 110 is installed (e.g., connected to a television set and connected to a Wi-Fi), wherein: The multimedia module is configured to collect a first multimedia signal both on-site and from remote in real time and transmit the first multimedia signal to the processing control module, wherein the first multimedia signal both on-site and from remote (from paragraph 0069, see The following is an example of an enterprise use implementation of the present application. A group of people wish to start a data and video conference collaboration session. Some of the people are located in a conference room that includes a TV and a connected Internet media extender (e.g., APPLE TV). Others of the people are in various remote geographic locations, with some using mobile computing devices (e.g., smartphones or tablet computers) and some using desktop/laptop computers. A user associated with the origin host starts the session via the Internet media extender via the TV APP and/or a mobile computing device 104 and via the MOBILE APP, which have been downloaded and installed on at least some of the respective devices 104 and 110 in the conference room and operated by the participants. After an initial configuration, which may occur the first time the MOBILE APP and/or TV APP is launched, audio/video content from one or more of the computing devices 104 is provided instantly on the television 112 via the Internet media extender 110.) 
at least comprises an instructor's courseware video signal, an instructor's lecturing image/video signal, an instructor's audio signal (from paragraph 0061, see FIGS. 5A-5G illustrate an example implementation and show display screens 500, 510, 520, 530, 540, 550 and 560, representing an implementation of the present application in which a host user presenter is selecting content for recording interactive video content and providing an interactive conference in accordance with an example implementation. As can be seen in the example implementation(s) shown in FIGS. 4-10, a user can establish a video conference, record video content, share the content and interact with the users during the video conference) and an image/video signal of on-site students (from paragraph 0061, see FIG. 5H illustrates an example display screen 570 of a video conference provided on a client device 104 in accordance with an example implementation. As illustrated, the video composed by the server 102 comprising the plurality of video feeds from the respective client devices 104 has been disassembled by the respective client device 102, and the various video feeds of the participants has been arranged according to the user's preference. In one or more implementations, the relative positions and formats of the disassembled users' video feeds can be predefined by the user or in other default configuration, and/or can be manipulated for a particular video conference session. 
The respective manipulations can be saved and used in future video conference sessions to position/format the respective feeds in accordance with a previous session); The WIFI module and the antenna interface are configured to connect to a wireless controller for receiving an operation instruction transmitted from the wireless controller and sending the operation instruction to the processing control module (from paragraph 0093, see Thus, using the accelerometer and/or gyroscope in the smartphone or other mobile computing device, a virtual pointer, and annotating tool or other selection tool in a coordinated presentation can be remotely controlled. In this way, a smartphone or other mobile computing devices effectively doubles as a mouse, selection device, drawing tool or other interactive device. Unlike infrared or wired connection, the mobile computing device and coordinated presentation authoring/playback device preferably communicate over Wi-Fi. The remote can ask the “master” device via Wi-Fi or other protocol, such as Bluetooth, for permission to connect therewith. The telematics of the mobile computing device, such as an accelerometer and/or gyroscope, is employed over a digital IP connection to transmit to the presentation authoring and/or playback software, which in turn functions to control a simulated laser red dot, drawing tool or other functionality, which can be configured as a core function of the presentation authoring and/or playback application); The processing control module is configured to receive and process the first multimedia signal to obtain a second multimedia signal, and send the processed second multimedia signal to the multimedia module (from paragraph 0046, see During subsequent uses, video content that is provided as a function audio/video output from the computing device (e.g., iPhone) is provided instantly on the television that is connected to the Internet media extender 110. 
In operation, audio/video feed from the iPhone is provided on big screen); and The multimedia module further comprises at least one set of multimedia output interfaces, configured to connect to, and transmit the processed second multimedia signal received from the processing control module to, a multimedia display device arranged locally and/or remotely for display (from paragraph 0095, see Thus, as shown and described herein, the present application provides a simple to use, yet powerful interactive remote video conferencing platform that incorporates a plurality of computing devices, e.g., smartphones, tablets, laptops and desktops, and enables live, real-time sharing and conferencing. One or more televisions 112 can be implemented in the present application via an Internet media extender 110, and content can be provided from a plurality of remote sources, such as cameras and/or microphones configured with user computing devices 104 that are located remotely and communicating over the Internet).

Regarding claim 2, Segal discloses the apparatus of claim 1, characterized in that the multimedia module comprises one set of multimedia input interfaces for inputting a local instructor's courseware video signal (from paragraph 0046, see In one or more implementations, at least one of the Internet media extender components 110 includes APPLE TV. After an Internet media extender 110 is installed (e.g., connected to a television set and connected to a Wi-Fi, Ethernet or other local area network), a software application is installed on the Internet media extender 110, as well as at least one mobile computing device 104. For example, a user downloads and installs an app to an Internet media extender 110 (“TV APP”) and also installs an app to a user computing device 104 (“MOBILE APP”). Once installed, and the first time the TV APP is executed, the user is prompted to launch the MOBILE APP.
Thereafter, the mobile computing device 104 (e.g., an iPhone) is automatically detected by the TV APP. During subsequent uses, video content that is provided as a function audio/video output from the computing device (e.g., iPhone) is provided instantly on the television that is connected to the Internet media extender 110. In operation, audio/video feed from the iPhone is provided on big screen. The TV APP and the MOBILE APP may be configured as a single application (e.g., distributed as a single application), or may be provided as separate applications).

Regarding claim 3, Segal discloses the apparatus of claim 1, characterized in that the multimedia module comprises one set of network input interfaces configured to connect to a cloud server for uploading a recorded courseware to the cloud server for storage (from paragraph 0054, see Materials associated with a respective session can be stored (e.g., backed up) remotely, e.g., in the “cloud” and be available for access, archived and/or made available for users in the future. Such control can, be restricted from future access, as well).

Regarding claim 4, Segal discloses the apparatus of claim 1, characterized in that the multimedia module comprises at least two sets of webcam interfaces for acquiring an instructor's lecturing image/video signal and an image/video signal of on-site students, respectively captured by local webcams (from paragraph 0028, see Thus, in one or more implementations, the present application provides for interactive video conferencing that integrates audio/video input and output from individual mobile computing devices (e.g., smartphones and tablet computers) with Internet media extender devices (e.g., APPLE TV).
By leveraging technology configured with mobile computing devices, e.g., cameras and microphones, the present application provides a new form of live and interactive functionality that can make a person's living room or other residential viewing area into a high-end video conferencing suite. Non-residential implementations are supported, as well, as shown and described in greater detail herein).

Regarding claim 5, Segal discloses the apparatus of claim 1, characterized in that the multimedia module comprises two sets of multimedia output interfaces; wherein one set of multimedia output interfaces is configured to connect to a head-up display arranged locally, and the other set of multimedia output interfaces is configured to connect to a home screen arranged locally (from paragraph 0095, see Thus, as shown and described herein, the present application provides a simple to use, yet powerful interactive remote video conferencing platform that incorporates a plurality of computing devices, e.g., smartphones, tablets, laptops and desktops, and enables live, real-time sharing and conferencing. One or more televisions 112 can be implemented in the present application via an Internet media extender 110, and content can be provided from a plurality of remote sources, such as cameras and/or microphones configured with user computing devices 104 that are located remotely and communicating over the Internet).

Regarding claim 6, Segal discloses the apparatus of claim 1, characterized in that the multimedia module further comprises an audio receiving module configured to receive and store an instructor's audio signal (from paragraph 0071, see The present application supports integration of multiple cameras and microphones that can be connected remotely to an Internet media extender, such as APPLE TV.
For example, a plurality of mobile computing devices 104 (e.g., iPhone/iPad/laptop) connect to a respective session and each provides audio/video feed to the Internet media extender 110 and television 112. This is similar, in practice, to a “TouchCast” studio multi-camera setup, which allows multiple cameras to feed into an authoring tool. A description of such an authoring tool is shown and described in greater detail in commonly assigned U.S. Pat. No. 9,363,448, issued Jun. 7, 2016. Supporting live audio/video feed by multiple cameras provides an advantage and technological benefit for multiple people located in the same room and/or remotely located to utilize their respective mobile devices. In one or more implementations, audio detection mechanisms can be employed such that when a user speaks, feed from the microphone and/or camera on that user's respective device is provided on audio output (e.g., speakers) associated with the television 112 (via, for example, the Internet media extender 110), as well as on connected computing devices 104 operated by people remotely located (i.e., not in the local setting). This provides a different and much improved solution to a conference room “bowling-alley-view” of a single camera located at the head of a table, which tries to capture everyone in the conference room. In one or more implementations, cameras associated with the connected computing devices 104 can be “cut to” via one of several ways. In one case, for example, the host user can make selections to switch input from various cameras/devices. In another example case, for example, the MOBILE APP configures the respective computing devices 104 with automatic speaker detection, which operates to detect when a user is speaking and input from that user's respective camera/microphone can be presented to the other user computing devices 104 in the session. 
In yet another case, for example, a user proactively takes control to have audio/video feed from his or her user computing device 104, which can be effected by simply tapping on the screen of the user's computing device 104, to make that user's device 104 provide the primary feed, and can be presented to the other user computing devices 104 in the session).

Regarding claim 7, Segal discloses the apparatus of claim 6, characterized in that the multimedia module further comprises an audio output module configured to output system and remote audio signals (from paragraph 0026, see In accordance with the teachings herein, implementations of the present application provide a simple to use, informing and entertaining communications experience that incorporates content from a plurality of computing devices, e.g., smartphones, tablets, laptops and desktops, and enables live sharing in a real-time and conferencing capability therefore. In one or more implementations, one or more televisions can be used for respective audio/visual display devices, and can provide feed from cameras and/or microphones configured with various local and/or remotely located computing devices that are communicating over data communication networks such as the Internet. A television can be implemented in the present application in various ways, such as via an Internet media extender provided by APPLE TV, ROKU, AMAZON FIRE TV or GOOGLE CHROMECAST).
Regarding claim 8, Segal discloses The apparatus of claim 1, characterized in that, further comprising an expansion module configured to connect to at least one of a USB splitter, a wired keyboard & mouse, a wireless keyboard & mouse, a laser pointer with remote control, an audio processor, a sound console, a sound card, and an audio mixer (from paragraph 0025, see By way of introduction and overview, in one or more implementations the present application provides systems and methods for providing interactive video conferencing over one or more data communication networks, such as the Internet. Devices operating, for example, iOS, ANDROID, WINDOWS MOBILE, BLACKBERRY, MAC OS, WINDOWS or other operating systems are configured with one or more software applications that provide functionality, such as with an interface for developing (“authoring”) distributable coordinated presentations. Such presentations can include interactive video having customizable and interactive functionality for and between devices with a plurality of end-users who receive the video. Further, the one or more software applications configure a user computing device with a viewing/interactive tool, referred to herein, generally, as a “consuming” interface for end-users who receive interactive video that are authored in accordance with the present application and usable for end-users to communicate (e.g., via interactive video conferencing functionality). Using the client interface, users may interact with each other and share interactive videos and other content as a function of touch and gestures, as well as graphical screen controls that, when selected, cause a computing device to execute one or more instructions and effect various functionality. For example, a smartphone or other mobile computing device can be configured via one or more applications in accordance with the ability to simulate a laser pointer, drawing tool, mouse, trackball, keyboard or other input device). 
Claim Rejections - 35 USC § 103

4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Segal in view of Kowalik et al., U.S. Patent No. 8,898,743 (hereinafter Kowalik).

Regarding claim 9, Segal does not disclose a multi-function interface module that supports at least one of the following protocols: a RS485 protocol, a CAN communication protocol, a UART protocol and a GPIO communication protocol. All the same, Kowalik discloses a multi-function interface module that supports at least one of the following protocols: a RS485 protocol, a CAN communication protocol, a UART protocol and a GPIO communication protocol (from column 9, see The components may be configured to communicate with each other using interfaces such as (but not limited to) one or more universal serial bus (USB) interfaces, micro-USB interfaces, universal asynchronous receiver-transmitter (UART) interfaces, general purpose input/output (GPIO) interfaces (e.g., inter-integrated circuit (i2C)), control/status lines, control/data lines, shared memory, and/or the like).
Therefore, it would have been obvious to one of ordinary skill in the art to modify Segal with a multi-function interface module that supports at least one of the following protocols: a RS485 protocol, a CAN communication protocol, a UART protocol and a GPIO communication protocol as taught by Kowalik. This modification would have improved the system’s flexibility by allowing for different interfaces as suggested by Kowalik.

Conclusion

6. Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLISA ANWAH whose telephone number is 571-272-7533. The examiner can normally be reached Monday to Friday from 8.30 AM to 6 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Carolyn Edwards, can be reached on 571-270-7136. The fax phone numbers for the organization where this application or proceeding is assigned are 571-273-8300 for regular communications and 571-273-8300 for After Final communications. Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to the receptionist whose telephone number is 571-272-2600.

Olisa Anwah
Patent Examiner
February 4, 2026

/OLISA ANWAH/
Primary Examiner, Art Unit 2692

/CAROLYN R EDWARDS/
Supervisory Patent Examiner, Art Unit 2692

Prosecution Timeline

Aug 05, 2024
Application Filed
Feb 03, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604130: HEARING DEVICE WITH A BLEEDING CIRCUIT FOR DELIVERING MESSAGES TO A CHARGING DEVICE
2y 5m to grant; granted Apr 14, 2026

Patent 12598710: Terminal Device
2y 5m to grant; granted Apr 07, 2026

Patent 12597251: VIDEO FRAMING BASED ON TRACKED CHARACTERISTICS OF MEETING PARTICIPANTS
2y 5m to grant; granted Apr 07, 2026

Patent 12596515: FIRST DEVICE, COMMUNICATION SERVER, SECOND DEVICE AND METHODS IN A COMMUNICATIONS NETWORK
2y 5m to grant; granted Apr 07, 2026

Patent 12598437: EARPHONES AND EARPHONE SYSTEM
2y 5m to grant; granted Apr 07, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview: 93% (+4.2%)
Median Time to Grant: 2y 1m
PTA Risk: Low
Based on 1162 resolved cases by this examiner. Grant probability derived from career allow rate.
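The note above says the headline grant probability is derived from the examiner's career allow rate. A minimal sketch of that arithmetic, assuming the dashboard simply divides career grants by resolved cases and adds the interview lift before rounding (the tool's actual model is not disclosed):

```python
# Hypothetical derivation of the dashboard's headline figures from the
# career counts shown above. The rounding behavior is an assumption.
granted = 1036          # career grants ("1036 granted / 1162 resolved")
resolved = 1162         # resolved applications
interview_lift = 4.2    # percentage-point lift with an examiner interview

allow_rate = 100 * granted / resolved          # 89.16... -> displayed as 89%
with_interview = allow_rate + interview_lift   # 93.36... -> displayed as 93%

print(f"Grant probability: {allow_rate:.0f}%")     # Grant probability: 89%
print(f"With interview: {with_interview:.0f}%")    # With interview: 93%
```

This also explains the small apparent mismatch between "+4.2% Interview Lift" and the 89% → 93% jump: the lift is applied before rounding to whole percentage points.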
