Prosecution Insights
Last updated: April 19, 2026
Application No. 18/455,788

DEVICE CONFIGURATION BASED ON DETECTED USER STATE

Non-Final OA (§102, §103)

Filed: Aug 25, 2023
Examiner: SEYEDVOSOGHI, FARID
Art Unit: 2645
Tech Center: 2600 — Communications
Assignee: Motorola Mobility LLC
OA Round: 1 (Non-Final)

Grant Probability: 83% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 3m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (374 granted / 450 resolved; +21.1% vs TC avg, above average)
Interview Lift: +17.0% (allow rate among resolved cases with vs. without an interview; a strong lift)
Typical Timeline: 2y 3m average prosecution; 19 applications currently pending
Career History: 469 total applications across all art units
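The headline figures above can be reproduced from the raw counts. A minimal sketch, assuming the counts shown in the report and reading "+21.1% vs TC avg" as an absolute percentage-point difference (an interpretation, not stated explicitly by the tool):

```python
# Deriving the headline examiner metrics from the counts in the report.
# Assumption: "vs TC avg" figures are absolute percentage-point differences.
granted = 374
resolved = 450

allow_rate = granted / resolved                 # career allow rate
print(f"Career allow rate: {allow_rate:.0%}")   # -> 83%

# Under the percentage-point reading, the implied Tech Center average is:
tc_avg = allow_rate - 0.211
print(f"Implied TC average: {tc_avg:.1%}")      # -> 62.0%
```

The rounding matches the dashboard: 374/450 is 83.1%, displayed as 83%.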

Statute-Specific Performance

§101: 4.8% (-35.2% vs TC avg)
§103: 61.0% (+21.0% vs TC avg)
§102: 15.9% (-24.1% vs TC avg)
§112: 6.5% (-33.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 450 resolved cases.

Office Action

Rejections: §102, §103
Detailed Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements submitted on 11/17/2023, 02/18/2025, 05/27/2025 and 09/08/2025 have been considered by the Examiner and made of record in the application file.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-2 and 11-12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Verbeke et al. (US 11,211,095 B1, hereinafter Verbeke).

Regarding claim 1, Verbeke discloses a mobile device (see e.g., “the computing device 110 include mobile devices (e.g., cellphone…”, Fig. 1, column 2, line 51) comprising:

one or more device applications configured to at least one of display visual content or emit audio content in a secondary user mode of operation of the mobile device (see e.g., “The multimedia application 145 can be any technically-feasible application that is capable of playing back media content…media content can include audio and/or video content…the multimedia application 145 could be a conventional media player application.”, Fig. 1, column 4, lines 7-11 and/or “the mental state metric can represent a cognitive load associated with the brain activity that the user is employing, an emotional load associated with an emotional state of the user, an amount of mind wandering by the user, or any combination thereof. The media controller application 140 controls playback of media content by the multimedia application 145 based on such a mental state metric”, column 3, lines 24-31; Examiner's note: mental state (i.e., of “secondary user mode”));

an access control module implemented at least partially in hardware (see e.g., “The media controller application 140 controls playback of media content by the multimedia application 145 based on such a mental state metric”, column 3, lines 29-31 and/or “the media controller application 140 processes in order to determine an emotional load that a user is experiencing as a mental state metric”, column 4, lines 64-66), the access control module configured to:

detect a user state of a user accessing the one or more device applications in the secondary user mode (see e.g., “step 502, where the media controller application 140 receives sensor data”, Fig. 5, column 11, lines 39-40 and/or “the media controller application 140 computes a mental state metric based on the sensor data as a user is consuming media content”, Fig. 5, column 11, lines 55-57 and/or “media controller application 140 determines that the mental state metric satisfies a threshold”, column 12, lines 42-43); and

at least one of: categorize a device application as restricted based on the user state being a detected first state (see e.g., “the threshold is a value that, when satisfied (e.g., exceeded), indicates that the user is experiencing a high cognitive load, a high emotional load, and/or a high amount of mind wandering…the mental state metric may include a combination of cognitive load, emotional load, and/or amount of mind wandering,”, column 12, lines 44-66 and/or “the media controller application 140 pauses playback of the media content. In some embodiments, the media controller application 140 can control the multimedia application 145 to pause playback of the media content. In other embodiments, the media controller application 140 may be included as part of the multimedia application 145 and directly pause playback of the media content.”, Fig. 5, column 13, lines 14-20; Examiner's note: pausing playback of the media (i.e., categorizing application as restricted application)); or

categorize the device application as allowed based on the user state being a detected second state (see e.g., “the media controller application 140 determines that the mental state metric no longer satisfies the threshold. Then, at step 514, the media controller application 140 resumes playback of the media content”, Fig. 5, column 13, lines 30-33; Examiner's note: resuming playback of the media content (i.e., categorizing application as allowed application)).

Regarding claim 2, Verbeke discloses wherein the access control module is configured to detect the user state of the user as at least one of an emotion of the user, a physiological state of the user, or a body posture of the user (see e.g., “the mental state metric can represent a cognitive load associated with the brain activity that the user is employing, an emotional load associated with an emotional state of the user, an amount of mind wandering by the user, or any combination thereof. The media controller application 140 controls playback of media content by the multimedia application 145 based on such a mental state metric”, column 3, lines 24-31 and/or “the sensor(s) 120 could include heart rate sensors and/or other biometric sensors that acquire biological and/or physiological signals of the user (e.g., heart rate, breathing rate, eye motions, GSR, neural brain activity, etc.)”, column 4, lines 56-59 and/or “position sensors (e.g., an accelerometer and/or an inertial measurement unit (IMU)), motion sensors, and so forth, that register the body position and/or movement of the user.”, column 6, lines 49-52 and/or “While driving, a driver of the vehicle is exposed to a variety of stimuli that are related to either a primary task (e.g., guiding the vehicle) and/or any number of secondary tasks…”, column 8, lines 56-67).

Regarding claim 11, Verbeke discloses a method (see e.g., “method for playing back media content”, column 1, line 46), comprising:

executing one or more device applications configured to at least one of display visual content or emit audio content in a secondary user mode of device operation (see e.g., “The multimedia application 145 can be any technically-feasible application that is capable of playing back media content…media content can include audio and/or video content…the multimedia application 145 could be a conventional media player application.”, Fig. 1, column 4, lines 7-11 and/or “the mental state metric can represent a cognitive load associated with the brain activity that the user is employing, an emotional load associated with an emotional state of the user, an amount of mind wandering by the user, or any combination thereof. The media controller application 140 controls playback of media content by the multimedia application 145 based on such a mental state metric”, column 3, lines 24-31; Examiner's note: mental state (i.e., of “secondary user mode”));

detecting one or more user states of a user accessing the one or more device applications in the secondary user mode (see e.g., “step 502, where the media controller application 140 receives sensor data”, Fig. 5, column 11, lines 39-40 and/or “the media controller application 140 computes a mental state metric based on the sensor data as a user is consuming media content”, Fig. 5, column 11, lines 55-57 and/or “media controller application 140 determines that the mental state metric satisfies a threshold”, column 12, lines 42-43); and

at least one of: categorizing a device application as restricted based on a user state being a detected first state (see e.g., “the threshold is a value that, when satisfied (e.g., exceeded), indicates that the user is experiencing a high cognitive load, a high emotional load, and/or a high amount of mind wandering…the mental state metric may include a combination of cognitive load, emotional load, and/or amount of mind wandering,”, column 12, lines 44-66 and/or “the media controller application 140 pauses playback of the media content. In some embodiments, the media controller application 140 can control the multimedia application 145 to pause playback of the media content. In other embodiments, the media controller application 140 may be included as part of the multimedia application 145 and directly pause playback of the media content.”, Fig. 5, column 13, lines 14-20; Examiner's note: pausing playback of the media (i.e., categorizing application as restricted application)); or

categorizing the device application as allowed based on the user state being a detected second state (see e.g., “the media controller application 140 determines that the mental state metric no longer satisfies the threshold.
Then, at step 514, the media controller application 140 resumes playback of the media content”, Fig. 5, column 13, lines 30-33; Examiner's note: resuming playback of the media content (i.e., categorizing application as allowed application)).

Regarding claim 12, Verbeke discloses wherein the user state of the user is detected as at least one of an emotion of the user, a physiological state of the user, or a body posture of the user (see e.g., “the mental state metric can represent a cognitive load associated with the brain activity that the user is employing, an emotional load associated with an emotional state of the user, an amount of mind wandering by the user, or any combination thereof. The media controller application 140 controls playback of media content by the multimedia application 145 based on such a mental state metric”, column 3, lines 24-31 and/or “the sensor(s) 120 could include heart rate sensors and/or other biometric sensors that acquire biological and/or physiological signals of the user (e.g., heart rate, breathing rate, eye motions, GSR, neural brain activity, etc.)”, column 4, lines 56-59 and/or “position sensors (e.g., an accelerometer and/or an inertial measurement unit (IMU)), motion sensors, and so forth, that register the body position and/or movement of the user.”, column 6, lines 49-52 and/or “While driving, a driver of the vehicle is exposed to a variety of stimuli that are related to either a primary task (e.g., guiding the vehicle) and/or any number of secondary tasks…”, column 8, lines 56-67).

Claims 19 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Srivastava et al. (US 2018/0193652 A1, hereinafter Srivastava).

Regarding claim 19, Srivastava discloses a system (see e.g., Figs. 3-5), comprising:

one or more device applications configured to at least one of display visual content or emit audio content (see e.g., “The output unit 242 may also display information including the sensor signals, trends of the signal metric, or any intermediary results for pain score calculation such as the signal metric-specific pain scores. The information may be presented in a table, a chart, a diagram, or any other types of textual, tabular, or graphical presentation formats, for displaying to a system user. The presentation of the output information may include audio or other human-perceptible media format”, [0072] and/or “…at least a portion of the method 500 may be executed by an external programmer or remote server-based patient management system…”, [0094] and/or “The pain score, including the composite pain score and optionally together with metric-specific pain scores, may be displayed on a display screen. Other information, such as the facial image or video sequence or recorded voice or speech, and the signal metrics generated therefrom, may also be output for display…”, [0104] and/or “execute mobile applications (“apps”) to detect the facial or vocal expression”, Fig. 4, [0088]);

an access control module implemented at least partially in hardware, the access control module (see e.g., Figs. 3-5) configured to:

categorize one or more detected user states as one of constructive or destructive (see e.g., “the recorded voice or speech may be processed at 520, including one or more of speech segmentation, transformation, feature extraction, and pattern recognition. From the processed voice or speech signal, one or more vocal expression metrics corresponding to speech motor control may be extracted, such as by using the speech processor 223, or by executing an mobile application such as the speech motor control analyzer app 436 implemented in the mobile device 300. Examples of the vocal expression metrics related to speech motor control may include speed, volume, pitch, inclination, regularity, and degree of coordination during speech. In some examples, the vocal expression metrics may be measured during a supervised session when the patient rapidly pronounces specific syllables or words that requires fine coordinated movement of jaw, lips, and anterior and posterior tongue. Speech motor slowness such as slower syllable pronunciation, or an increased variability of accuracy in syllable pronunciation, may indicate intensity or duration of pain”, Fig. 5, [0100]);

generate an emotional ranking associated with the one or more device applications based at least in part on the one or more user states of the user accessing the respective device applications (see e.g., “a pain score may be generated using the measurements of the signal metrics. The pain score may be generated using a combination of a plurality of facial expression metrics, a combination of a plurality of vocal expression metrics, or a combination of at least one facial expression metric and at least one vocal expression metric. In some examples, one or more signal metrics generated from a physiological or functional signal may additionally be used to generate the pain score. The pain score may be represented as a numerical or categorical value that quantifies overall pain quality in the subject. In an example, a composite signal metric may be generated using a linear or nonlinear combination of the signal metrics respectively weighted by weight factors. The composite signal metric may be categorized as one of a number of degrees of pain by comparing the composite signal metric to one or more threshold values or range values, and a corresponding pain score may be assigned based on the comparison”, Figs. 2-5, [0101]).
Regarding claim 20, Srivastava discloses wherein the access control module is configured to detect each of the one or more detected user states of the user as at least one of an emotion of the user, a physiological state of the user, or a body posture of the user (see e.g., “in addition to the information corresponding to emotional expressions such as facial or vocal expressions, the sensor circuit 210 may further be coupled to one or more sensors to sense a physiological or functional signal. Various physiological signals, such as cardiac, pulmonary, neural, or biochemical signals may demonstrate characteristic signal properties in response to an onset, intensity, severity, duration, or patterns of pain. Examples of the physiological signals may include an electrocardiograph (ECG), intracardiac electrogram, gyrocardiography, magnetocardiography, a heart rate signal, a heart rate variability signal, a cardiovascular pressure signal, a heart sounds signal, a respiratory signal, a thoracic impedance signal, a respiratory sounds signal, or blood chemistry measurements or expression levels of one or more biomarkers. Examples of the functional signals may include patient posture, gait, balance, or physical activity signals, among others”, [0057]).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:

(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 3, 8, 13 and 17 are rejected under 35 U.S.C. 103(a) as being unpatentable over Verbeke, in view of Panchaksharaiah et al. (US 2016/0323643 A1, hereinafter Panchaksharaiah).

Regarding claim 3, Verbeke fails to explicitly disclose wherein the access control module is configured to generate a ranked list of the one or more device applications based at least in part on the user state and associated categorization of the respective device applications.

In the same field of endeavor, Panchaksharaiah discloses wherein the access control module is configured to generate a ranked list of the one or more device applications based at least in part on the user state and associated categorization of the respective device applications (see e.g., “control circuitry 304 may identify replacement content based on the level of attentiveness of user 504. For example, control circuitry 304 may determine a level of attentiveness by monitoring the number and frequency of user interactions (e.g., monitoring use of user input interface 310) or by imaging user 504 via sensors accessible to control circuitry 304 (e.g., via camera 508 located on user device 502). Control circuitry 304 may then identify replacement content based on the level of attentiveness of user 504. For example, control circuitry 304 may identify replacement content for a segment of a movie that may be adapted to include a logo or advertisement in a portion of the screen. If the level of user attentiveness is above a threshold level, control circuitry 304 may include a paid advertisement as part of the replacement content. If the level of user attentiveness is not above a threshold level, control circuitry 304 may include a summary of the movie plot instead.”, [0123] and/or “Examples of possible replacement content may include an application, a broadcast channel, a simulated phone call, a simulated news broadcast, a social media update, a website, and some other media that is normally accessible by the device the control circuitry is using to display the media asset on. For example, in the above situation, the control circuitry may identify a social media update that would be of interest to both the child and the child's friend. The control circuitry may then replace the segment with the identified replacement content. Continuing the above example, the control circuitry may generate for display the social media update, covering the screen of the smart-phone and blocking the child from seeing the violent content”, [0006] and/or “the control circuitry may determine a level of attentiveness of the user…the level of attentiveness exceeds a threshold level, the control circuitry may identify replacement content comprising an advertisement and replace the segment of media with the advertisement. If the level of attentiveness does not exceed the threshold, the control circuitry will identify replacement content comprising non-advertising media, and replace the segment of media with the non-advertising media. For example, if the control circuitry determines that a user is focused on a movie, the control circuitry may include a paid advertisement along with the replacement content, but a user who is disengaged from the movie may be presented with a summary of the plot thus far”, [0013]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine teachings of Verbeke with Panchaksharaiah, in order to adapt the content controls and blocking methods based on a user environment, alleviating the need for users to actively monitor and change their content controls and resulting in an enhanced user experience (please see Panchaksharaiah, paragraph [0002]).

Regarding claim 8, Verbeke fails to explicitly disclose wherein the access control module is configured to develop an acceptable content list specific to the user, the acceptable content list including at least one of allowable visual content or allowable audio content.

In the same field of endeavor, Panchaksharaiah discloses wherein the access control module is configured to develop an acceptable content list specific to the user, the acceptable content list including at least one of allowable visual content or allowable audio content (see e.g., “At 704, control circuitry 304 may identify the next segment of the media asset to be presented to the user...”, Fig. 7, [0130] and/or “At 706, control circuitry 304 may determine characteristics of the user (e.g., user 504 (FIG. 5)). For example, control circuitry 304 may access a locally stored user profile (e.g., from storage 308 (FIG. 3)), to determine a user age, name, and content preferences.”, Fig. 7, [0131] and/or “At 708, control circuitry 304 may determine whether the segment should be presented to user 504. For example, control circuitry 304 may determine from the user preferences that user 504 is indifferent to “blood,” but should not be shown “partial nudity” due to the user's age. Depending on the determination made by control circuitry 304, the process may continue to 710 if the segment may be generated for display to user 504”, Fig. 7, [0132] and/or “At 710, control circuitry 304 may present the segment to the user.”, Fig. 7, [0133]; Examiner's note: presenting the segment to the user based on determining the characteristic of the user (i.e., “allowable visual content”)).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine teachings of Verbeke with Panchaksharaiah, in order to adapt the content controls and blocking methods based on a user environment, alleviating the need for users to actively monitor and change their content controls and resulting in an enhanced user experience (please see Panchaksharaiah, paragraph [0002]).

Regarding claim 13, Verbeke fails to explicitly disclose generating a ranked list of the one or more device applications based at least in part on the one or more user states and associated categorization of the respective device applications.

In the same field of endeavor, Panchaksharaiah discloses generating a ranked list of the one or more device applications based at least in part on the one or more user states and associated categorization of the respective device applications (see e.g., “control circuitry 304 may identify replacement content based on the level of attentiveness of user 504.
For example, control circuitry 304 may determine a level of attentiveness by monitoring the number and frequency of user interactions (e.g., monitoring use of user input interface 310) or by imaging user 504 via sensors accessible to control circuitry 304 (e.g., via camera 508 located on user device 502). Control circuitry 304 may then identify replacement content based on the level of attentiveness of user 504. For example, control circuitry 304 may identify replacement content for a segment of a movie that may be adapted to include a logo or advertisement in a portion of the screen. If the level of user attentiveness is above a threshold level, control circuitry 304 may include a paid advertisement as part of the replacement content. If the level of user attentiveness is not above a threshold level, control circuitry 304 may include a summary of the movie plot instead.”, [0123] and/or “Examples of possible replacement content may include an application, a broadcast channel, a simulated phone call, a simulated news broadcast, a social media update, a website, and some other media that is normally accessible by the device the control circuitry is using to display the media asset on. For example, in the above situation, the control circuitry may identify a social media update that would be of interest to both the child and the child's friend. The control circuitry may then replace the segment with the identified replacement content. Continuing the above example, the control circuitry may generate for display the social media update, covering the screen of the smart-phone and blocking the child from seeing the violent content”, [0006] and/or “the control circuitry may determine a level of attentiveness of the user…the level of attentiveness exceeds a threshold level, the control circuitry may identify replacement content comprising an advertisement and replace the segment of media with the advertisement. If the level of attentiveness does not exceed the threshold, the control circuitry will identify replacement content comprising non-advertising media, and replace the segment of media with the non-advertising media. For example, if the control circuitry determines that a user is focused on a movie, the control circuitry may include a paid advertisement along with the replacement content, but a user who is disengaged from the movie may be presented with a summary of the plot thus far”, [0013]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine teachings of Verbeke with Panchaksharaiah, in order to adapt the content controls and blocking methods based on a user environment, alleviating the need for users to actively monitor and change their content controls and resulting in an enhanced user experience (please see Panchaksharaiah, paragraph [0002]).

Regarding claim 17, Verbeke fails to explicitly disclose developing an acceptable content list specific to the user, the acceptable content list including at least one of allowable visual content or allowable audio content.

In the same field of endeavor, Panchaksharaiah discloses developing an acceptable content list specific to the user, the acceptable content list including at least one of allowable visual content or allowable audio content (see e.g., “At 704, control circuitry 304 may identify the next segment of the media asset to be presented to the user...”, Fig. 7, [0130] and/or “At 706, control circuitry 304 may determine characteristics of the user (e.g., user 504 (FIG. 5)). For example, control circuitry 304 may access a locally stored user profile (e.g., from storage 308 (FIG. 3)), to determine a user age, name, and content preferences.”, Fig. 7, [0131] and/or “At 708, control circuitry 304 may determine whether the segment should be presented to user 504. For example, control circuitry 304 may determine from the user preferences that user 504 is indifferent to “blood,” but should not be shown “partial nudity” due to the user's age. Depending on the determination made by control circuitry 304, the process may continue to 710 if the segment may be generated for display to user 504”, Fig. 7, [0132] and/or “At 710, control circuitry 304 may present the segment to the user.”, Fig. 7, [0133]; Examiner's note: presenting the segment to the user based on determining the characteristic of the user (i.e., “allowable visual content”)).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine teachings of Verbeke with Panchaksharaiah, in order to adapt the content controls and blocking methods based on a user environment, alleviating the need for users to actively monitor and change their content controls and resulting in an enhanced user experience (please see Panchaksharaiah, paragraph [0002]).

Claims 4-5 and 14-15 are rejected under 35 U.S.C. 103(a) as being unpatentable over Verbeke, in view of WANG (CN 104516806 A, hereinafter Wang).

Regarding claim 4, Verbeke fails to explicitly disclose wherein the access control module is configured to generate the ranked list of the one or more device applications to include captured screen shots of the visual content associated with the respective device applications.
In the same field of endeavor, Wang discloses, wherein the access control module is configured to generate the ranked list of the one or more device applications to include captured screen shots of the visual content associated with the respective device applications (see e.g., “the display module 205 displays the percentage of power consumption per unit time for each program in the program power consumption ranking display interface.”, [0201] and/or “ranking interface displays the percentage of power consumption per unit time for each program”, [0025] and/or “power-saving button and the name of the recommended application to be closed can also be displayed on the display interface; and after receiving a click event of the power-saving button, the application to be closed will be stopped from running”, [0185]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine teachings of Verbeke with Wang, in order to display a ranking of each program on the display of the device and provide recommendation or disable the program (please see Wang, paragraphs [0025] and [0136). Regarding Claim 5, Verbeke and Wang combined disclose, wherein the access control module is configured to generate a highlight reel of the captured screen shots of the visual content associated with the respective device applications (see Wang e.g., “The obtained power consumption information is then sorted and displayed as a power consumption leaderboard for each program. Users can then identify programs with high power consumption and close them, prevent them from starting automatically…”, [0005] and/or “the power consumption information of each program of the mobile terminal is extracted from it, and displayed to the user in sorted order. This allows users to effectively identify power-consuming programs…”, [0051]). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine teachings of Verbeke with Wang, in order to display a ranking of each program on the display of the device and provide a recommendation or disable the program (please see Wang, paragraphs [0025] and [0136]).

Regarding Claim 14, Verbeke fails to explicitly disclose, wherein the ranked list of the one or more device applications is generated to include captured screen shots of the visual content associated with the respective device applications.

In the same field of endeavor, Wang discloses, wherein the access control module is configured to generate the ranked list of the one or more device applications to include captured screen shots of the visual content associated with the respective device applications (see e.g., “the display module 205 displays the percentage of power consumption per unit time for each program in the program power consumption ranking display interface.”, [0201] and/or “ranking interface displays the percentage of power consumption per unit time for each program”, [0025] and/or “power-saving button and the name of the recommended application to be closed can also be displayed on the display interface; and after receiving a click event of the power-saving button, the application to be closed will be stopped from running”, [0185]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine teachings of Verbeke with Wang, in order to display a ranking of each program on the display of the device and provide a recommendation or disable the program (please see Wang, paragraphs [0025] and [0136]).
Regarding Claim 15, Verbeke and Wang combined disclose, generating a highlight reel of the captured screen shots of the visual content associated with the respective device applications (see Wang e.g., “The obtained power consumption information is then sorted and displayed as a power consumption leaderboard for each program. Users can then identify programs with high power consumption and close them, prevent them from starting automatically…”, [0005] and/or “the power consumption information of each program of the mobile terminal is extracted from it, and displayed to the user in sorted order. This allows users to effectively identify power-consuming programs…”, [0051]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine teachings of Verbeke with Wang, in order to display a ranking of each program on the display of the device and provide a recommendation or disable the program (please see Wang, paragraphs [0025] and [0136]).

Claims 6-7 and 16 are rejected under 35 U.S.C. 103(a) as being unpatentable over Verbeke, in view of WANG, and further in view of YAN (CN 108650532 A, hereinafter Yan).

Regarding Claim 6, Verbeke fails to explicitly disclose, wherein the access control module is configured to assess similar visual content or similar audio content based at least in part on categorization of the one or more device applications, and at least one of restrict or allow the similar visual content or the similar audio content based on assessment.
In the same field of endeavor, Yan discloses, wherein the access control module is configured to assess similar visual content or similar audio content based at least in part on categorization of the one or more device applications, and at least one of restrict or allow the similar visual content or the similar audio content based on assessment (see e.g., “user similarity and program similarity are calculated based on the user-program rating matrix, and a neighborhood recommendation model is used to generate the third program candidate set C3;”, [0038]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine teachings of Verbeke with Yan, in order to provide recommendation of cable TV on-demand programs, enabling personalized recommendations for users and improving the accuracy and efficiency of recommendations (please see Yan, paragraph [0009]).

Regarding Claim 7, Verbeke and Yan combined disclose, wherein the access control module is configured to recommend allowed visual content that is assessed similar to the visual content or the audio content of the device application categorized as allowed (see e.g., “user similarity and program similarity are calculated based on the user-program rating matrix, and a neighborhood recommendation model is used to generate the third program candidate set C3;”, [0038]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine teachings of Verbeke with Yan, in order to provide recommendation of cable TV on-demand programs, enabling personalized recommendations for users and improving the accuracy and efficiency of recommendations using similarity (please see Yan, paragraph [0009]).
Regarding Claim 16, Verbeke fails to explicitly disclose, assessing similar visual content or similar audio content based at least in part on categorization of the one or more device applications, and at least one of restricting or allowing the similar visual content or the similar audio content based on assessment.

In the same field of endeavor, Yan discloses, assessing similar visual content or similar audio content based at least in part on categorization of the one or more device applications, and at least one of restricting or allowing the similar visual content or the similar audio content based on assessment (see e.g., “user similarity and program similarity are calculated based on the user-program rating matrix, and a neighborhood recommendation model is used to generate the third program candidate set C3;”, [0038]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine teachings of Verbeke with Yan, in order to provide recommendation of cable TV on-demand programs, enabling personalized recommendations for users and improving the accuracy and efficiency of recommendations (please see Yan, paragraph [0009]).

Claims 9 and 18 are rejected under 35 U.S.C. 103(a) as being unpatentable over Verbeke, in view of Srivastava.

Regarding Claim 9, Verbeke fails to explicitly disclose, generating an emotional ranking associated with the one or more device applications based at least in part on the one or more user states of the user accessing the respective device applications.

In the same field of endeavor, Srivastava discloses, generating an emotional ranking associated with the one or more device applications based at least in part on the one or more user states of the user accessing the respective device applications (see e.g., “a pain score may be generated using the measurements of the signal metrics.
The pain score may be generated using a combination of a plurality of facial expression metrics, a combination of a plurality of vocal expression metrics, or a combination of at least one facial expression metric and at least one vocal expression metric. In some examples, one or more signal metrics generated from a physiological or functional signal may additionally be used to generate the pain score. The pain score may be represented as a numerical or categorical value that quantifies overall pain quality in the subject. In an example, a composite signal metric may be generated using a linear or nonlinear combination of the signal metrics respectively weighted by weight factors. The composite signal metric may be categorized as one of a number of degrees of pain by comparing the composite signal metric to one or more threshold values or range values, and a corresponding pain score may be assigned based on the comparison”, Figs. 2-5, [0101]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine teachings of Verbeke with Srivastava, in order to use emotional expressions such as facial or vocal expression metrics or physiological metrics generating a numerical or categorical value in the application for quantification of symptoms (please see Srivastava, paragraphs [0064]-[0065]).

Regarding Claim 18, Verbeke fails to explicitly disclose, wherein the access control module is configured to generate an emotional ranking associated with the one or more device applications based at least in part on the user state of the user accessing the respective device applications.
In the same field of endeavor, Srivastava discloses, wherein the access control module is configured to generate an emotional ranking associated with the one or more device applications based at least in part on the user state of the user accessing the respective device applications (see e.g., “a pain score may be generated using the measurements of the signal metrics. The pain score may be generated using a combination of a plurality of facial expression metrics, a combination of a plurality of vocal expression metrics, or a combination of at least one facial expression metric and at least one vocal expression metric. In some examples, one or more signal metrics generated from a physiological or functional signal may additionally be used to generate the pain score. The pain score may be represented as a numerical or categorical value that quantifies overall pain quality in the subject. In an example, a composite signal metric may be generated using a linear or nonlinear combination of the signal metrics respectively weighted by weight factors. The composite signal metric may be categorized as one of a number of degrees of pain by comparing the composite signal metric to one or more threshold values or range values, and a corresponding pain score may be assigned based on the comparison”, Figs. 2-5, [0101]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine teachings of Verbeke with Srivastava, in order to use emotional expressions such as facial or vocal expression metrics or physiological metrics generating a numerical or categorical value in the application for quantification of symptoms (please see Srivastava, paragraphs [0064]-[0065]).

Claim 10 is rejected under 35 U.S.C. 103(a) as being unpatentable over Verbeke, in view of Srivastava, and further in view of Yan.
Regarding Claim 10, Verbeke and Srivastava combined fail to explicitly disclose, wherein the access control module is configured to merge the emotional ranking that is associated with the one or more device applications with a different emotional ranking received from another device.

In the same field of endeavor, Yan discloses wherein the access control module is configured to merge the emotional ranking that is associated with the one or more device applications with a different emotional ranking received from another device (see e.g., “collecting viewing behavior data of cable television users and crawling metadata of online programs…and each user's rating of each program constitutes a user-program rating matrix… obtaining multiple program candidate sets by employing multiple analysis methods based on the user program rating matrix and the standardized metadata… the user-program rating matrix is decomposed using matrix factorization, user similarity and movie-program similarity are calculated”, [0038]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine teachings of Verbeke and Srivastava with Yan, in order to provide recommendation of cable TV on-demand programs, enabling personalized recommendations for users and improving the accuracy and efficiency of recommendations (please see Yan, paragraph [0009]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant’s disclosure. Shoemake et al. (US 2015/0070516 A1) disclose AUTOMATIC CONTENT FILTERING. SASTRY et al. (US 2022/0269388 A1) disclose SECURITY/AUTOMATION SYSTEM CONTROL PANEL GRAPHICAL USER INTERFACE. Lee et al. (US 2018/0181566 A1) disclose METHODS AND SYSTEMS FOR GENERATING A MEDIA CLIP LIBRARY. Resudek (US 2019/0114060 A1) discloses USER INTERFACE CUSTOMIZATION BASED ON FACIAL RECOGNITION.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FARID SEYEDVOSOGHI whose telephone number is (571) 272-9679. The examiner can normally be reached Mon - Fri 8:00-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anthony S. Addy, can be reached at (571) 272-7795. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FARID SEYEDVOSOGHI/
Examiner, Art Unit 2645

Prosecution Timeline

Aug 25, 2023
Application Filed
Nov 20, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598576
METHODS FOR POSITIONING IN LOW-POWER REDUCED CAPABILITY USER EQUIPMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12598247
ELECTRONIC DEVICE AND METHOD FOR CONTROLLING A CONNECTABLE EXTERNAL DEVICE
2y 5m to grant Granted Apr 07, 2026
Patent 12597348
METHOD AND SYSTEM FOR INTER AND INTRA AGENCY COMMUNICATION, TRACKING AND COORDINATION
2y 5m to grant Granted Apr 07, 2026
Patent 12587270
Cellular Core Network and Radio Access Network Infrastructure and Management in Space
2y 5m to grant Granted Mar 24, 2026
Patent 12581400
WIRELESS COMMUNICATION METHOD IN WIRELESS FIDELITY NETWORK AND ELECTRONIC DEVICE FOR PERFORMING THE SAME
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
99%
With Interview (+17.0%)
2y 3m
Median Time to Grant
Low
PTA Risk
Based on 450 resolved cases by this examiner. Grant probability derived from career allow rate.
