Prosecution Insights
Last updated: April 19, 2026
Application No. 18/916,701

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD AND NON-TRANSITORY COMPUTER-READABLE MEDIUM

Status: Final Rejection (§103)
Filed: Oct 16, 2024
Examiner: CHAE, KYU
Art Unit: 2426
Tech Center: 2400 — Computer Networks
Assignee: Yokogawa Electric Corporation
OA Round: 2 (Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 70% — above average (429 granted / 616 resolved; +11.6% vs TC avg)
Interview Lift: +13.6% (moderate, ≈+14%; with vs. without interview, among resolved cases with an interview)
Typical Timeline: 2y 9m average prosecution; 22 applications currently pending
Career History: 638 total applications across all art units
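The headline figures above are simple ratios and can be sanity-checked directly. A minimal Python sketch, using the counts from the panel above; note the Tech Center baseline is inferred from the stated +11.6% delta, not reported on the page:

```python
# Career allow rate from the examiner's resolved-case counts shown above.
granted = 429
resolved = 616

allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")  # ≈ 69.6%, shown rounded as 70%

# The panel reports +11.6% vs the Tech Center average, implying a TC 2400 baseline of:
delta_vs_tc = 11.6
tc_average = round(allow_rate, 1) - delta_vs_tc
print(f"Implied TC 2400 average: {tc_average:.1f}%")
```

The raw rate is 69.6%, which the dashboard rounds to the 70% shown.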

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 57.0% (+17.0% vs TC avg)
§102: 12.4% (-27.6% vs TC avg)
§112: 6.3% (-33.7% vs TC avg)
Tech Center average is an estimate • Based on career data from 616 resolved cases
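Each per-statute delta is simply the examiner's rate minus the Tech Center average. A small sketch recomputing them; the 40.0% baseline is an inference (every delta shown above is consistent with it, but the page never states it directly):

```python
# Per-statute rates for this examiner, as reported in the panel above.
examiner_rates = {"§101": 11.1, "§103": 57.0, "§102": 12.4, "§112": 6.3}
tc_average = 40.0  # inferred: all four stated deltas are consistent with this baseline

for statute, rate in examiner_rates.items():
    delta = rate - tc_average
    print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```

Running this reproduces the four deltas shown in the panel.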

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The Office Action is in response to an AMENDMENT entered 12/7/2025.

Status of Claims

Claims 1-20 are pending.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 10-20 are rejected under 35 U.S.C. 103 as being unpatentable over US Pub. No. 20150071600 A1 to Azam in view of US Pub. No. 20170127011 A1 to Okajima and in further view of US Pub. No. 20160210407 A1 to Hwang.

As to claims 1, 19 and 20, Azam discloses an information processing apparatus, comprising: at least one processor, wherein: the at least one processor acquires a brain wave information of each of a plurality of subjects to whom content, which is common, is provided (Azam ¶0089-0090, 0092-0094, EEG sensor receives neural activity, e.g.
brainwaves, for each of the users/subjects that are viewing the video together at the display 104); and the at least one processor controls the content based on the brain wave information of each of the plurality of subjects and subject identification information of each of the plurality of subjects (Azam ¶0089-0090, 0092-0094, system 102 controls the video based on the EEG signals of the brainwaves of the user/subjects 106). Azam does not expressly disclose the at least one processor acquires a chronological change in brain wave information of each of a plurality of subjects to whom content, which is common, is provided; and the at least one processor controls the content based on the chronological change in brain wave information of each of the plurality of subjects and subject identification information of each of the plurality of subjects. Okajima discloses the at least one processor controls the content based on subject identification information of each of the plurality of subjects (Okajima ¶0029-0031, 0036-0044, 0095, circuit 10 that controls the moving picture based on the viewer information e.g. number of viewers, gaze region(s), distance from the display, etc). It would have been obvious to a person of ordinary skilled in the art before the effective filing date of the claimed invention to modify Azam by the at least one processor controls the content based on subject identification information of each of the plurality of subjects as disclosed by Okajima. The suggestion/motivation would have been in order to increase the quality of the video based on the gaze region and/or distance of each of the viewers thereby enhancing the user’s experience. 
Azam and Okajima do not expressly disclose the at least one processor acquires a chronological change in brain wave information of each of a plurality of subjects to whom content, which is common, is provided; and the at least one processor controls the content based on the chronological change in brain wave information of each of the plurality of subjects. Hwang discloses the at least one processor acquires a chronological change in brain wave information of each of a plurality of subjects to whom content, which is common, is provided (Hwang Fig. 1-7, ¶0049, 0050, 0076, 0081-0085, 0088-0092, 0097-0098 0109-0110, receive bio-signals of the plurality of users who are watching content together, where the bio-signals includes a sequence of change in brainwave signal of the users, See Fig. 3, where the bio-signals are determined in a time sequence, Also see Fig. 5, where the bio-signals in time sequence shows change from 10:22pm and 10:32pm from sharp decrease in concentration levels to detection of sleep wave); and the at least one processor controls the content based on the chronological change in brain wave information of each of the plurality of subjects (Hwang Fig. 1-7, ¶0049, 0050, 0076, 0081-0085, 0088-0092, 0097-0098 0109-0110, displaying the content by changing display region corresponding to user according to each user’s concentration levels based on the bio-signals, where the bio-signals includes a sequence of change in brainwave signal of the users, See Fig. 3, where the bio-signals are determined in a time sequence, Also see Fig. 5, where the bio-signals in time sequence shows change from 10:22pm and 10:32pm from sharp decrease in concentration levels to detection of sleep wave). 
It would have been obvious to a person of ordinary skilled in the art before the effective filing date of the claimed invention to modify Azam and Okajima by the at least one processor acquires a chronological change in brain wave information of each of a plurality of subjects to whom content, which is common, is provided; and the at least one processor controls the content based on the chronological change in brain wave information of each of the plurality of subjects as disclosed by Hwang. The suggestion/motivation would have been in order to determine the users state using a timeline to increase the accuracy of changes in the user’s brainwave signals over a period of time thereby enhancing the user’s experience. As to claim 2, Azam discloses the at least one processor estimates a state of each of the plurality of subjects based on the chronological change in brain wave information of each of the plurality of subjects (Azam ¶0089-0090, 0092-0094, 0105, 0109, 0130, calculating the emotional state based on the brainwaves of each of the users/subjects and Hwang Fig. 1-7, ¶0049, 0050, 0076, 0081-0085, 0088-0092, 0097-0098 0109-0110, receive bio-signals of the plurality of users who are watching content together, where the bio-signals includes a sequence of change in brainwave signal of the users, See Fig. 3, where the bio-signals are determined in a time sequence, Also see Fig. 5, where the bio-signals in time sequence shows change from 10:22pm and 10:32pm from sharp decrease in concentration levels to detection of sleep wave), the at least one processor controls the content based on the state of each of the plurality of subjects and the subject identification information (Azam ¶0089-0090, 0092-0094, system 102 controls the video based on the EEG signals of the brainwaves of the user/subjects 106 and Okajima ¶0029-0031, 0036-0044, 0095, circuit 10 that controls the moving picture based on the viewer information e.g. 
number of viewers, gaze region(s), distance from the display, etc and Hwang Fig. 1-7, ¶0049, 0050, 0076, 0081-0085, 0088-0092, 0097-0098 0109-0110, displaying the content by changing display region corresponding to user according to each user’s concentration levels based on the bio-signals, where the bio-signals includes a sequence of change in brainwave signal of the users, See Fig. 3, where the bio-signals are determined in a time sequence, Also see Fig. 5, where the bio-signals in time sequence shows change from 10:22pm and 10:32pm from sharp decrease in concentration levels to detection of sleep wave). As to claim 3, Azam and Okajima discloses wherein: the at least one processor acquires biological information of each of the plurality of subjects provided with the content (Okajima ¶0029-0031, 0036-0044, 0095, age/gender, blood pressure, heart rate); and the at least one processor controls the content based on the chronological change in brain wave information and the biological information of each of the plurality of subjects, and the subject identification information (Azam ¶0089-0090, 0092-0094, system 102 controls the video based on the EEG signals of the brainwaves of the user/subjects 106 and Okajima ¶0029-0031, 0036-0044, 0095, circuit 10 that controls the moving picture based on the viewer information e.g. number of viewers, gaze region(s), distance from the display, and biological information such as heart rate or blood pressure, etc and Hwang Fig. 1-7, ¶0049, 0050, 0076, 0081-0085, 0088-0092, 0097-0098 0109-0110, receive bio-signals of the plurality of users who are watching content together and displaying the content by changing display region corresponding to user according to each user’s concentration levels based on the bio-signals, where the bio-signals includes a sequence of change in brainwave signal of the users, See Fig. 3, where the bio-signals are determined in a time sequence, Also see Fig. 
5, where the bio-signals in time sequence shows change from 10:22pm and 10:32pm from sharp decrease in concentration levels to detection of sleep wave). As to claim 10, Hwang discloses the at least one processor acquires the chronological change in brain wave information of each of the plurality of subjects when time-series details of the content is provided (Hwang Fig. 1-7, ¶0049, 0050, 0081-0085, 0088-0092, 0109-0110, timeline of changes in the users brainwaves signals while consuming the content); and the at least one processor estimates the chronological change in the state in each of the plurality of subjects based on the chronological change in brain wave information (Hwang Fig. 1-7, ¶0049, 0050, 0081-0085, 0088-0092, 0109-0110, timeline of changes in the users brainwaves signals while consuming the content used the determining the users changes in different brainwave states). As to claim 11, Hwang discloses the at least one processor acquires the chronological change in brain wave information of each of the plurality of subjects when time-series details of the content is provided (Hwang Fig. 1-7, ¶0049, 0050, 0081-0085, 0088-0092, 0109-0110, timeline of changes in the users brainwaves signals while consuming the content); and the at least one processor estimates the chronological change in the state in each of the plurality of subjects based on the chronological change in brain wave information (Hwang Fig. 1-7, ¶0049, 0050, 0081-0085, 0088-0092, 0109-0110, timeline of changes in the users brainwaves signals while consuming the content used the determining the users changes in different brainwave states). As to claim 12, Azam and Okajima discloses wherein: the subject identification information includes attribute information representing an attribute of the subject (Okajima ¶0029-0031, 0036-0044, 0095, circuit 10 that controls the moving picture based on the viewer information e.g. 
number of viewers, gaze region(s), distance from the display, etc), the subject being included in the plurality of subjects (Azam ¶0089-0090, 0092-0094 plurality of user/subjects and Okajima ¶0029-0031, 0036-0044, 0095, plurality of users); and the at least one processor controls the content based on the state of each of the plurality of subjects and the attribute information (Okajima ¶0029-0031, 0036-0044, 0095, circuit 10 that controls the moving picture based on the viewer information e.g. number of viewers, gaze region(s), distance from the display, etc). As to claim 13, Azam and Okajima discloses wherein: the subject identification information includes attribute information representing an attribute of the subject (Okajima ¶0029-0031, 0036-0044, 0095, circuit 10 that controls the moving picture based on the viewer information e.g. number of viewers, gaze region(s), distance from the display, etc), the subject being included in the plurality of subjects (Azam ¶0089-0090, 0092-0094 plurality of user/subjects and Okajima ¶0029-0031, 0036-0044, 0095, plurality of users); and the at least one processor controls the content based on the state of each of the plurality of subjects and the attribute information (Okajima ¶0029-0031, 0036-0044, 0095, circuit 10 that controls the moving picture based on the viewer information e.g. number of viewers, gaze region(s), distance from the display, etc). 
As to claim 14, Azam, Okajima and Hwang discloses wherein: the subject identification information includes position information representing a position of the subject in a real space or a virtual space (Okajima ¶0029-0031, 0036-0044, 0095, 0115-0121, gaze region, distance from the display device), the subject being included in the plurality of subjects (Azam ¶0089-0090, 0092-0094 plurality of user/subjects and Okajima ¶0029-0031, 0036-0044, 0095, plurality of users); and the at least one processor controls the content based on the chronological change in brain wave information and the position information (Okajima ¶0029-0031, 0036-0044, 0095, 0115-0121,circuit 10 that controls the moving picture based on the viewer information e.g. number of viewers, gaze region(s), distance from the display, etc and Hwang Fig. 1-7, ¶0049, 0050, 0076, 0081-0085, 0088-0092, 0097-0098 0109-0110, receive bio-signals of the plurality of users who are watching content together and displaying the content by changing display region corresponding to user according to each user’s concentration levels based on the bio-signals, where the bio-signals includes a sequence of change in brainwave signal of the users, See Fig. 3, where the bio-signals are determined in a time sequence, Also see Fig. 5, where the bio-signals in time sequence shows change from 10:22pm and 10:32pm from sharp decrease in concentration levels to detection of sleep wave). 
As to claim 15, Azam, Okajima and Hwang discloses wherein: the subject identification information includes position information representing a position of the subject in a real space or a virtual space (Okajima ¶0029-0031, 0036-0044, 0095, 0115-0121, gaze region, distance from the display device), the subject being included in the plurality of subjects (Azam ¶0089-0090, 0092-0094 plurality of user/subjects and Okajima ¶0029-0031, 0036-0044, 0095, plurality of users); and the at least one processor controls the content based on the chronological change in brain wave information and the position information (Okajima ¶0029-0031, 0036-0044, 0095, 0115-0121, circuit 10 that controls the moving picture based on the viewer information e.g. number of viewers, gaze region(s), distance from the display, etc and Hwang Fig. 1-7, ¶0049, 0050, 0076, 0081-0085, 0088-0092, 0097-0098 0109-0110, receive bio-signals of the plurality of users who are watching content together and displaying the content by changing display region corresponding to user according to each user’s concentration levels based on the bio-signals, where the bio-signals includes a sequence of change in brainwave signal of the users, See Fig. 3, where the bio-signals are determined in a time sequence, Also see Fig. 5, where the bio-signals in time sequence shows change from 10:22pm and 10:32pm from sharp decrease in concentration levels to detection of sleep wave). 
As to claim 16, Okajima and Hwang discloses wherein: a plurality of regions are set in the real space or the virtual space (Okajima ¶0029-0031, 0036-0044, 0095, 0115-0121, gaze region,); and the at least one processor estimates a state of a group of the plurality of subjects in each of the plurality of regions, based on the chronological change in brain wave information of the plurality of subjects, in each of the plurality of regions (Okajima ¶0029-0031, 0036-0044, 0095, 0115-0121, determining attentiveness of the plurality of users in the different viewing location/positions based on the biological information such as brainwave and Hwang Fig. 1-7, ¶0049, 0050, 0076, 0081-0085, 0088-0092, 0097-0098 0109-0110, receive bio-signals of the plurality of users who are watching content together and displaying the content by changing display region corresponding to user according to each user’s concentration levels based on the bio-signals, where the bio-signals includes a sequence of change in brainwave signal of the users, See Fig. 3, where the bio-signals are determined in a time sequence, Also see Fig. 5, where the bio-signals in time sequence shows change from 10:22pm and 10:32pm from sharp decrease in concentration levels to detection of sleep wave). As to claim 17, Azam, Okajima and Hwang discloses wherein: the subject identification information includes terminal identification information that identifies a terminal of the subject (Azam ¶0090, 0137, each sensor 108 preferably comprises a Bluetooth transmitter as it has a low-power consumption profile and transmits a unique identifier so that when multiple EEG sensors 108 are transmitting their data to the Myndplay System 102, each sensor 108 can be distinguished and Okajima ¶0029-0031, 0036-0044, 0095, the viewer information e.g. 
number of viewers, gaze region(s), distance from the display, etc), the subject being included in the plurality of subjects, which is provided by the content (Azam ¶0089-0090, 0092-0094 plurality of user/subjects and Okajima ¶0029-0031, 0036-0044, 0095, plurality of users); and the at least one processor controls the content based on the chronological change in brain wave information and the terminal identification information (Azam ¶0089-0090, 0092-0094, 0137, system 102 controls the video based on the EEG signals of the brainwaves of the user/subjects 106 and the unique identifier and Okajima ¶0029-0031, 0036-0044, 0095, circuit 10 that controls the moving picture based on the viewer information e.g. number of viewers, gaze region(s), distance from the display, etc and Hwang Fig. 1-7, ¶0049, 0050, 0076, 0081-0085, 0088-0092, 0097-0098 0109-0110, receive bio-signals of the plurality of users who are watching content together and displaying the content by changing display region corresponding to user according to each user’s concentration levels based on the bio-signals, where the bio-signals includes a sequence of change in brainwave signal of the users, See Fig. 3, where the bio-signals are determined in a time sequence, Also see Fig. 5, where the bio-signals in time sequence shows change from 10:22pm and 10:32pm from sharp decrease in concentration levels to detection of sleep wave). As to claim 18, Azam, Okajima and Hwang discloses wherein: the subject identification information includes terminal identification information that identifies a terminal of the subject (Azam ¶0090, 0137, each sensor 108 preferably comprises a Bluetooth transmitter as it has a low-power consumption profile and transmits a unique identifier so that when multiple EEG sensors 108 are transmitting their data to the Myndplay System 102, each sensor 108 can be distinguished and Okajima ¶0029-0031, 0036-0044, 0095, the viewer information e.g. 
number of viewers, gaze region(s), distance from the display, etc), the subject being included in the plurality of subjects, which is provided by the content (Azam ¶0089-0090, 0092-0094 plurality of user/subjects and Okajima ¶0029-0031, 0036-0044, 0095, plurality of users); and the at least one processor controls the content based on the chronological change in brain wave information and the terminal identification information (Azam ¶0089-0090, 0092-0094, 0137, system 102 controls the video based on the EEG signals of the brainwaves of the user/subjects 106 and the unique identifier and Okajima ¶0029-0031, 0036-0044, 0095, circuit 10 that controls the moving picture based on the viewer information e.g. number of viewers, gaze region(s), distance from the display, etc and Hwang Fig. 1-7, ¶0049, 0050, 0076, 0081-0085, 0088-0092, 0097-0098 0109-0110, receive bio-signals of the plurality of users who are watching content together and displaying the content by changing display region corresponding to user according to each user’s concentration levels based on the bio-signals, where the bio-signals includes a sequence of change in brainwave signal of the users, See Fig. 3, where the bio-signals are determined in a time sequence, Also see Fig. 5, where the bio-signals in time sequence shows change from 10:22pm and 10:32pm from sharp decrease in concentration levels to detection of sleep wave). Claim 4 are rejected under 35 U.S.C. 103 as being unpatentable over US Pub. No. 20150071600 A1 to Azam in view of US Pub. No. 20170127011 A1 to Okajima in further view of US Pub. No. 20160210407 A1 to Hwang and in further view of US Pub. No. 20180301054 A1 to Banergi. As to claim 4, Azam, Okajima and Hwang discloses wherein: the at least one processor further acquires the chronological change in brain wave information during provision of the content (Azam ¶0089-0090, 0092-0094, EEG sensor receives neural activity, e.g. 
brainwaves, for each of the users/subjects that are viewing the video together at the display 104 while watching the video and Hwang Fig. 1-7, ¶0049, 0050, 0076, 0081-0085, 0088-0092, 0097-0098 0109-0110, receive bio-signals of the plurality of users who are watching content together and displaying the content by changing display region corresponding to user according to each user’s concentration levels based on the bio-signals, where the bio-signals includes a sequence of change in brainwave signal of the users, See Fig. 3, where the bio-signals are determined in a time sequence, Also see Fig. 5, where the bio-signals in time sequence shows change from 10:22pm and 10:32pm from sharp decrease in concentration levels to detection of sleep wave); and the at least one processor estimates the state of each of the plurality of subjects based the biological information (Okajima ¶0029-0031, 0036-0044, 0095, blood pressure, heart rate, Hwang Fig. 1-7, ¶0049, 0050, 0076, 0081-0085, 0088-0092, 0097-0098 0109-0110, receive bio-signals of the plurality of users who are watching content together and displaying the content by changing display region corresponding to user according to each user’s concentration levels based on the bio-signals, where the bio-signals includes a sequence of change in brainwave signal of the users, See Fig. 3, where the bio-signals are determined in a time sequence, Also see Fig. 5, where the bio-signals in time sequence shows change from 10:22pm and 10:32pm from sharp decrease in concentration levels to detection of sleep wave). Azam, Okajima and Hwang do not expressly disclose the at least one processor estimates the state of each of the plurality of subjects based on a change from the chronological change in brain wave information before provision of the content to the chronological change in brain wave information during provision of the content. 
Banergi discloses the at least one processor estimates the state of each of the plurality of subjects based on a change from the chronological change in brain wave information before provision of the content to the chronological change in brain wave information during provision of the content (Banergi ¶0084, prior to and during subject viewing or exposure to a psychologically stressful or expected stressful scene, the subject's mind state as indicated by their EEG signals or bio-markers). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Azam, Okajima and Hwang by the at least one processor estimates the state of each of the plurality of subjects based on a change from the chronological change in brain wave information before provision of the content to the chronological change in brain wave information during provision of the content as disclosed by Banergi. The suggestion/motivation would have been in order to determine the baseline state of the user before viewing the content and during the content, thereby increasing the accuracy of the change in the user's state and enhancing the user's experience.

Allowable Subject Matter

Claims 5-9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Response to Arguments

Applicant's arguments with respect to claims 1-4 and 10-20 have been considered but are moot in view of the new ground(s) of rejection. Applicant's arguments filed 12/7/2025 related to claims 1-4 and 10-20 have been fully considered but they are not persuasive. In reference to Applicant's arguments: In reply to the rejection of claims 1-3 and 12-20 under 35 U.S.C. 103 as being purportedly unpatentable over Azam in view of Okajima, the Applicant respectfully requests reconsideration. These claims now recite (as amended) controlling "...
the content based on the chronological change in brain wave information of each of the plurality of subjects and subject identification information of each of the plurality of subjects." The recitation of "chronological change" is amended from dependent claim 10. Azam relates to control mechanisms, as specified in the title. Okajima relates to a semiconductor integrated circuit, display device provided with same, and control method, as specified in the title. Page 13 of the Office Action admits (in relation to dependent claim 10) that "Azam and Okajima do not ... disclose the information acquisition unit acquires a chronological change in the brain wave information of each of the plurality of subjects when time-series details of the content is provided; and the state estimation unit estimates the chronological change in the state in each of the plurality of subjects based on the chronological change in the brain wave information". Accordingly, neither Azam nor Okajima teaches or suggests (alone or in combination) the emphasized claim recitations of controlling "... the content based on the chronological change in brain wave information of each of the plurality of subjects and subject identification information of each of the plurality of subjects." At least because the cited prior art references do not teach or suggest (alone or in combination) all the recitations of the claims, a prima facie case of obviousness under 35 U.S.C. 103 cannot be established. At least for these reasons, the Applicant respectfully requests withdrawal of this rejection and respectfully solicits a Notice of Allowance.

Examiner's Response: The examiner respectfully disagrees.
Azam, Okajima and Hwang in combination do not expressly disclose the at least one processor acquires a chronological change in brain wave information of each of a plurality of subjects to whom content, which is common, is provided; and the at least one processor controls the content based on the chronological change in brain wave information of each of the plurality of subjects. Azam discloses EEG sensor receives neural activity, e.g. brainwaves, for each of the users/subjects that are viewing the video together at the display 104 (Azam ¶0089-0090, 0092-0094) and the system 102 controls the video based on the EEG signals of the brainwaves of the user/subjects 106 (Azam ¶0089-0090, 0092-0094). Okajima discloses circuit 10 that controls the moving picture based on the viewer information e.g. number of viewers, gaze region(s), distance from the display, etc (Okajima ¶0029-0031, 0036-0044, 0095). Hwang discloses receiving bio-signals of the plurality of users who are watching content together and displaying the content by changing display region corresponding to user according to each user's concentration levels based on the bio-signals, where the bio-signals include a sequence of changes in the time domain of the brainwave signal of the users, See Fig. 3, where the bio-signals are determined in a time sequence, Also see Fig. 5, where the bio-signals in time sequence show change from 10:22pm to 10:32pm from sharp decrease in concentration levels to detection of sleep wave (Hwang Fig. 1-7, ¶0049, 0050, 0076, 0081-0085, 0088-0092, 0097-0098, 0109-0110). Therefore, the applicant's arguments are not persuasive and the examiner respectfully disagrees.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Claims 1-4 and 10-20 have been rejected. Claims 5-9 are objected to.

Correspondence Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYU CHAE whose telephone number is (571)270-5696. The examiner can normally be reached from 8:00am to 4:30pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, NASSER MOAZZAMI, can be reached at 571-272-4195. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KYU CHAE/ Primary Examiner, Art Unit 2426
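The reply-deadline rules in the action's closing paragraph reduce to simple date arithmetic. A sketch under the plain reading (the Mar 16, 2026 mailing date is this action's reported date; the calculation ignores weekend/holiday rollover and the advisory-action adjustment described in the action):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the same day-of-month `months` later (no month-end clamping needed here)."""
    month_index = d.month - 1 + months
    return date(d.year + month_index // 12, month_index % 12 + 1, d.day)

mailed = date(2026, 3, 16)  # mailing date of this final action

shortened_period = add_months(mailed, 3)  # shortened statutory period: THREE MONTHS
statutory_max = add_months(mailed, 6)     # absolute cutoff: SIX MONTHS

print("Reply due (no extension):", shortened_period)  # 2026-06-16
print("Statutory maximum:", statutory_max)            # 2026-09-16
```

Extensions under 37 CFR 1.136(a) buy time between those two dates, for a fee; nothing extends past the six-month statutory bar.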

Prosecution Timeline

Oct 16, 2024: Application Filed
Sep 05, 2025: Non-Final Rejection — §103
Dec 07, 2025: Response Filed
Mar 16, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598345: Displaying On-Screen Information — granted Apr 07, 2026 (2y 5m to grant)
Patent 12593105: COMMENT PROCESSING METHOD, ELECTRONIC DEVICE AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM — granted Mar 31, 2026 (2y 5m to grant)
Patent 12584604: DIGITAL MICRO-MIRROR DEVICE (DMD) SYSTEM — granted Mar 24, 2026 (2y 5m to grant)
Patent 12587715: SYSTEMS AND METHODS FOR DISPLAYING A CONTENT ITEM BANNER — granted Mar 24, 2026 (2y 5m to grant)
Patent 12579808: COMPUTERIZED SYSTEM AND METHOD FOR FINE-GRAINED EVENT DETECTION AND CONTENT HOSTING THEREFROM — granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 70%
With Interview: 83% (+13.6%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 616 resolved cases by this examiner. Grant probability derived from career allow rate.
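The with-interview figure is the base grant probability plus the reported interview lift. A one-line check using the panel's numbers (taking the unrounded 69.6% career allow rate as the base is an assumption, but it is consistent with the 83% shown):

```python
base_grant_probability = 69.6  # career allow rate, 429/616 (shown rounded to 70%)
interview_lift = 13.6          # percentage-point lift reported for examiner interviews

with_interview = base_grant_probability + interview_lift
print(f"Grant probability with interview: {with_interview:.0f}%")  # 83%
```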
