Prosecution Insights
Last updated: April 18, 2026
Application No. 17/862,816

Control Method and Apparatus
Status: Final Rejection (§103, §112, Double Patenting)

Filed: Jul 12, 2022
Examiner: BLAUFELD, JUSTIN R
Art Unit: 2151
Tech Center: 2100 — Computer Architecture & Software
Assignee: Huawei Technologies Co., Ltd.
OA Round: 4 (Final)

Grant Probability: 47% (Moderate)
Predicted OA Rounds: 5-6
Predicted Time to Grant: 3y 5m
Grant Probability With Interview: 80%

Examiner Intelligence

Career Allow Rate: 47% (235 granted / 500 resolved; -8.0% vs Tech Center average)
Interview Lift: +32.5% (allowance rate among resolved cases with vs. without an interview)
Avg Prosecution: 3y 5m typical; 66 applications currently pending
Career History: 566 total applications across all art units

Statute-Specific Performance

§101: 9.0% (-31.0% vs TC avg)
§103: 40.7% (+0.7% vs TC avg)
§102: 24.6% (-15.4% vs TC avg)
§112: 20.1% (-19.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 500 resolved cases.

Office Action

Rejections: §103, §112, Double Patenting
Detailed Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination under 37 C.F.R. § 1.114

A request for continued examination under 37 C.F.R. § 1.114, including the fee set forth in 37 C.F.R. § 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 C.F.R. § 1.114, and the fee set forth in 37 C.F.R. § 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 C.F.R. § 1.114. Applicant's submission filed on September 30, 2024 has been entered.

Response to Amendment

This Non-Final Office action is responsive to the Request for Continued Examination filed on September 30, 2024 (hereafter "Response"). The amendments to the claims are acknowledged and have been entered. Claims 1, 6, 13, 15, and 19 are now amended. Claims 1-20 are pending in the application.

Response to Arguments

Applicant's arguments with respect to the amended claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Specification

The disclosure is objected to because of the following informalities:

(1) The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following titles are suggested:
"Navigation Bar with Integrated AI Function Access"
"Context-Aware Control via a Non-Navigation Button"
"User Interface Control Method for Quick Access to AI and Scene-Based Tasks"

(2) In paragraph 155, the phrase "service scene task screen" appears to be a typographical error for "scene service task screen."

Appropriate correction is required.

Claim Rejections – 35 U.S.C. § 112(a)

The following is a quotation of the first paragraph of 35 U.S.C. § 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. § 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Please note: there are two separate rejections (I and II) below, one for lack of enablement of claim 6 and one for lack of written description for claim 13. Both must be addressed to fully respond to this Office action.

I. Enablement Rejection of Claim 6

Claim 6 is rejected under 35 U.S.C. § 112(a) or 35 U.S.C. § 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement. The claim contains subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention. Claim 6 recites, in relevant part, "determining, by AI, first recommended information based on one or more display objects displayed on the first application interface." The specification fails to provide an enabling disclosure for this limitation.
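The breadth of this functional language can be seen by paraphrasing the claimed step as code. The sketch below is a hypothetical illustration, not anything disclosed in the specification: the disclosure fixes only the inputs and outputs of the step, so even a deliberately trivial keyword-frequency rule arguably satisfies the recited function as well as any deep learning model would.

```python
from collections import Counter

def determine_recommended_information(display_objects):
    """Paraphrase of 'determining, by AI, first recommended information
    based on one or more display objects displayed on the first
    application interface.' Only this signature is constrained by the
    disclosure; the trivial body below is a stand-in that nothing in
    the specification distinguishes from any other 'AI' technique."""
    words = Counter()
    for obj in display_objects:
        words.update(w.lower() for w in obj.split())
    # "Recommend" the three most frequent on-screen terms.
    return [word for word, _ in words.most_common(3)]
```

For a screen showing "Weather in Paris", "Paris hotels", and "Flights to Paris", the rule returns "paris" as the top recommendation, which illustrates how little the functional language excludes.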
The term "AI" is used as a "black box" or an oracle that performs the claimed function, but the disclosure provides no substantive details regarding its implementation. A skilled artisan would be required to engage in undue experimentation to implement the "AI" component necessary to practice the claimed method. A claim fails to meet the enablement requirement when the experimentation needed to make or use the invention is "undue" or "unreasonable." MPEP § 2164.01 (citing Minerals Separation, Ltd. v. Hyde, 242 U.S. 261, 270 (1916)). The following factors may be considered when determining whether a disclosure satisfies the enablement requirement and whether any necessary experimentation is "undue":

(A) the breadth of the claims;
(B) the nature of the invention;
(C) the state of the prior art;
(D) the level of one of ordinary skill;
(E) the level of predictability in the art;
(F) the amount of direction provided by the inventor;
(G) the existence of working examples; and
(H) the quantity of experimentation needed to make or use the invention based on the content of the disclosure.

MPEP § 2164.01(a) (citing In re Wands, 858 F.2d 731, 737 (Fed. Cir. 1988)). In this case, the inquiry into the Wands factors provides a preponderance of evidence of non-enablement, as discussed below.

(A) The Breadth of the Claims. This factor weighs heavily in favor of non-enablement. Under this factor, "the relevant concern is whether the scope of enablement provided to one skilled in the art by the disclosure is commensurate with the scope of protection sought by the claims." MPEP § 2164.08 (citing AK Steel Corp. v. Sollac, 344 F.3d 1234, 1244 (Fed. Cir. 2003); In re Moore, 439 F.2d 1232, 1236 (CCPA 1971)). In other words, there must be a "reasonable correlation" between the scope of enablement and the scope of the claims. MPEP § 2164.08. The limitation "determining, by AI" is exceptionally broad and amorphous.
It encompasses any and all possible artificial intelligence techniques, from simple rule-based expert systems to complex deep learning models, that might be used to recommend information based on on-screen objects. The specification, however, only provides high-level functional descriptions such as "semantic analysis" (para. [0126]) and "screen recognition" (para. [0114]) without disclosing any specific algorithms, model architectures, or implementation details. To support a claim of such vast scope, a correspondingly broad and detailed disclosure is required, which is absent here.

(B) The Nature of the Invention and (C) The State of the Prior Art. These factors weigh in favor of non-enablement. The invention lies in the highly complex and unpredictable field of contextual AI and content recommendation. While the general concepts of "semantic analysis" or "speech recognition" existed at the time of filing, their specific implementation as "AI" to derive meaningful "recommended information" from a diverse and arbitrary set of on-screen "display objects" (text, voice, and image) is not a routine or well-established art. The success of such a system is highly dependent on the specific algorithms chosen, the architecture of the model, and the nature and quality of the training data. The prior art does not provide an off-the-shelf AI solution that a skilled artisan could simply "plug in" to achieve the claimed result. The specification fails to fill this gap, leaving a skilled artisan to guess which of the countless possible AI approaches might work.

(D) The Level of Ordinary Skill in the Art. A skilled software developer in this field would likely have a degree in computer science or a related field and experience with software development, including familiarity with machine learning libraries and concepts. However, this level of skill does not supplant the need for a specific disclosure. A skilled artisan is not expected to be an inventor.
While such an artisan could implement known algorithms, they would not know which specific AI algorithms, in which combination, and with which parameters or training data would successfully perform the claimed function of determining recommended information across the full scope of the claim without the guidance that is missing from this specification.

(E) The Level of Predictability in the Art. This factor weighs heavily in favor of non-enablement. The field of artificial intelligence is one of the least predictable technical fields. Unlike deterministic arts where components have predictable interactions, AI development is characterized by extensive trial and error. There is no guarantee that a given model architecture or training regimen will result in a functional system. Furthermore, many AI models operate as "black boxes," arriving at unexplainable answers where even their creators cannot fully articulate the reasoning behind a specific output. A model might correctly recommend information for one set of display objects but fail inexplicably on a slightly different set. A skilled artisan could not reasonably predict the outcome or behavior of a chosen implementation without first building and testing it through this extensive trial-and-error process, underscoring the profound lack of predictability and the need for a detailed disclosure.

(F) The Amount of Direction or Guidance Presented. This factor weighs heavily in favor of non-enablement. The specification provides a critical lack of direction, describing what the "AI" should do but not how to build it. The disclosure is devoid of any AI-specific details, such as learning models (e.g., Naive Bayes, SVM, LSTM, CNN), model architectures, training data, or feature extraction procedures. In fact, the little guidance that is provided points toward conventional, non-AI algorithms, suggesting the term "AI" is merely an arbitrary label.
For example, the function of "semantic analysis" to extract keywords is a textbook application of Term Frequency-Inverse Document Frequency (TF-IDF). "Screen recognition" of text and images is readily performed by standard Optical Character Recognition (OCR) and classic computer vision algorithms such as the Scale-Invariant Feature Transform (SIFT), respectively. Even "speech recognition" is achievable through classic signal processing and pattern-matching algorithms such as Dynamic Time Warping (DTW). Because all the disclosed functions can be performed by non-AI methods, the specification provides no guidance for a skilled artisan to implement an "AI" solution specifically, making it impossible to enable a claim that explicitly requires one.

(G) The Quantity of Experimentation Necessary. This factor weighs heavily in favor of non-enablement. To practice the invention using the claimed "AI," a skilled software developer would have to engage in a multi-stage process of trial and error. The process would begin with researching and selecting from dozens of potential AI algorithms for image, text, and voice analysis, followed by designing and architecting a model or system of models capable of integrating these disparate inputs. From there, the developer would have to acquire or generate massive datasets for training. Finally, they would face a lengthy, iterative process of training, tuning, and validating the system. This final step alone requires an enormous amount of trial and error, particularly in selecting and tuning the model's hyperparameters. These parameters, such as learning rate, batch size, and the number of hidden layers, fundamentally control the model's behavior, yet their optimal values are not theoretically derivable and must be discovered through experimentation with countless combinations. This entire workflow constitutes a significant research project, not routine optimization, and is therefore "undue."

(H) The Presence or Absence of Working Examples.
This factor weighs heavily in favor of non-enablement. The specification provides no working examples. It describes several hypothetical use cases (e.g., viewing news in para. [0112], a social application in para. [0124], watching a video in para. [0128]), but it never provides a concrete example of a specific set of "display objects" being processed by a disclosed algorithm to produce a specific piece of "first recommended information." The absence of a single, reproducible example further demonstrates the non-enabling nature of the disclosure.

Weighing the Wands factors, particularly the extreme breadth of the claim, the unpredictable nature of the art, and the profound lack of specific guidance or working examples in the specification, it is concluded that the specification does not teach a skilled artisan how to make and use the invention of claim 6 without undue experimentation. The claim relies on "AI" as a functional black box and is therefore rejected under 35 U.S.C. § 112(a).

II. New Matter Rejection of Claims 13-14

Claims 13 and 14 are rejected under 35 U.S.C. § 112(a) or 35 U.S.C. § 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. § 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

The Applicant amended claim 13 to replace a simple determination of the second and third applications based on use habits with a limitation stating that the second and third application programs themselves "are based on a use habit of the user." The Applicant does not cite any part of the disclosure for support of the amendment. The written description does not disclose application programs that, themselves, are based on use habits.
The written description merely says that the electronic device determines the identities of pre-existing applications based on the user's use habits. See Spec. ¶ 18 ("the third application program and the fourth application program are determined by the electronic device based on a use habit of the user"); ¶ 99 ("Priorities of different types of scene service tasks may be preset by the user, or may be set according to use habits of most users when the mobile phone is delivered from a factory"); ¶ 146 ("The third application program and the fourth application program [displayed on the scene service task screen] are determined by the electronic device based on a use habit of the user"). Accordingly, claim 13 is rejected for reciting new matter, and claim 14 is rejected because it incorporates the new matter of its parent claim 13 by reference.

Claim Rejections – 35 U.S.C. § 112(b)

The following is a quotation of 35 U.S.C. § 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. § 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 7 is rejected under 35 U.S.C. § 112(b) or 35 U.S.C. § 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. § 112, the applicant) regards as the invention. The amendment to claim 1 renders the scope of unamended claim 7 unclear: claim 7 recites displaying the first recommended information in a "floating manner" as merely one of three possible alternatives, whereas claim 1 requires the floating interface.
Accordingly, claim 7 is rejected under 35 U.S.C. § 112(b) for being indefinite.

Claim Rejections – 35 U.S.C. § 112(d)

The following is a quotation of 35 U.S.C. § 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. § 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. § 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claims 2, 4, and 18 are rejected under 35 U.S.C. § 112(d) or pre-AIA 35 U.S.C. § 112, fourth paragraph, as being of improper dependent form for failing to further limit the subject matter of the claims upon which they depend, or for failing to include all the limitations of the claims upon which they depend.

Claim 2 fails to limit claim 1, because the non-navigation button of claim 1 is already a button. If the Applicant intended claim 2 to recite an embodiment with only one button, the Examiner recommends amending claim 2 to say so more clearly, e.g., "The method of claim 1, wherein the navigation bar comprises no more than one non-navigation button."

Claim 4 fails to limit claim 3 because claim 3 incorporates the "floating" limitation by reference to claim 1. Claim 18 fails to limit claim 17 because claim 17 incorporates the "floating" limitation by reference to claim 15.
Applicant may cancel the claims, amend the claims to place them in proper dependent form, rewrite the claims in independent form, or present a sufficient showing that the dependent claims comply with the statutory requirements.

Claim Rejections – 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. § 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering the patentability of the claims, the Examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 C.F.R. § 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the Examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention.

I. Lee and Brown teach claims 1-8, 11-13, and 15-20.

Claims 1-8, 11-13, and 15-20 are rejected under 35 U.S.C. § 103 as being unpatentable over U.S. Patent Application Publication No. 2015/0015500 A1 ("Lee") in view of U.S. Patent Application Publication No. 2003/0142109 A1 ("Brown").
Claim 1

Lee teaches a method comprising:

displaying a first interface comprising a navigation bar that comprises a navigation button and at least one non-navigation button

Starting with FIG. 4(b), Lee teaches a mobile terminal configured to display an interface comprising a "navigation region" with at least three navigation buttons 402-406, "including a cancel button, a home button and a menu button that are implemented as a soft key type." Lee ¶ 114. In addition to these navigation buttons, Lee's mobile terminal may further add to its navigation region at least one button for performing a function other than navigation. See Lee ¶ 120 and FIGS. 5 and 7. Three particularly relevant examples of this extra button include the quick voice button 640 shown in FIG. 6C, the volume button 670 shown in FIG. 6E, and the integrated access button shown in FIG. 6F. These will be discussed in detail below, together with their corresponding claim elements where relevant.

when the navigation button is triggered, performing at least one of: returning to a previous interface;

"[T]he cancel button can be used to . . . cancel an inputted user command," Lee ¶ 112, thus returning the interface to a state prior to when the user inputted the command.

jumping to a home interface

"The home button, which is common to the Android OS and the iOS, can be used for . . . a shift to a home screen, an inter job shift in a multi-tasking environment and the like." Lee ¶ 112.

or invoking an interface of an application program accessed within a preset time up to a current moment;

"And, the menu button can be used to page an appropriate menu associated with a currently outputted screen." Lee ¶ 112.

receiving a first input of a user on the at least one non-navigation button;

The mobile terminal detects when any button in the navigation area is "touched," including all three examples of non-navigation buttons given above. See Lee ¶¶ 126, 128, and 129.
displaying, in response to the first input, a floating artificial intelligence (AI) function entry interface or a scene service task interface that corresponds to the at least one non-navigation button;

Quick voice example: "If the quick voice button 640 is touched, referring to FIG. 6C, the controller can enter a state capable of receiving an input of user's voice, i.e., a voice input mode," Lee ¶ 126, displaying an extra voice-related menu above the navigation region. See Lee FIG. 6C. Lee does not necessarily say that this extra menu is "floating," but the claim language only requires the AI function entry interface to float, not the scene service task interface. Thus, at a minimum, Lee's voice input mode falls within the broadly recited scope of a "scene service task interface," and the claim language recites the AI function entry interface and scene service task interface as alternatives (a showing of only one is necessary) by using the word "or" between them. The quick voice example falls within the scope of the scene service task interface because it provides an interface that is applicable to the current screen (e.g., to provide voice input to the current screen). See Lee ¶ 126.

Volume example: "If the volume button 670 is touched, referring to FIG. 6E, the controller can control a pair of volume adjust buttons (e.g., `+` button 672 and `-` button 674) to be displayed." Lee ¶ 128. The volume example at least falls within the scope of the scene service task interface because it provides controls that are applicable to the current screen. The volume example also partially falls within the scope of the AI function entry interface, because the speech-bubble appearance of these buttons in FIG. 6E at least suggests that it is a "floating" interface, albeit not necessarily one that displays recommended information as required for the AI function entry interface.

Integrated access example: "If the integrated access button is 680 touched, referring to FIG. 6F, the controller can control a plurality of buttons 682, 684 and 686, which are related to a plurality of functions linked to the integrated access button, respectively, to be displayed." Lee ¶ 129. The integrated access example at least partially falls within the scope of the claimed AI function entry interface because it is overlaid onto the corner of the main interface above the navigation area, and because it displays information that "may be manually set by a user" (and thus is recommended for display by the user).

Recent button example: "A recent button is configured to provide a list of recently activated applications. If the recent button 690 is touched, referring to FIG. 6G, the controller can control a recently activated application list 695 to be displayed." Lee ¶ 130. This example at least falls within the scope of the claimed scene service task interface because it meets the definition of that interface as described in claim 13. "By definition, an independent claim is broader than a claim that depends from it, so if a dependent claim reads on a particular embodiment of the claimed invention, the corresponding independent claim must cover that embodiment as well." Littelfuse, Inc. v. Mersen USA EP Corp., 29 F.4th 1376, 1380 (Fed. Cir. 2022).

Brown's resource aid example: As will be discussed below, Brown teaches a resource aid (or help aid) interface that meets each and every limitation of the claimed floating AI function entry interface, and is also triggered in response to a user input to a button. The present rejection considers the obviousness of adding yet another button to Lee's navigation area that triggers Brown's resource aids, in view of Lee's teaching and suggestion that the buttons disclosed in FIGS. 6A-6F are merely non-limiting "examples" and that "various buttons can be configured as additional navigation buttons" other than the examples given. Lee ¶¶ 132-134.
wherein the floating AI function entry interface presents first recommended information

Lee does not need to teach either of the two limitations for the floating AI function entry interface recited above, because the floating AI function entry interface is one of two alternative limitations, and Lee discloses the other of those two alternatives. That is, since Lee at least discloses a scene service task interface, Lee does not need to disclose "a floating AI function entry interface," because disclosing the scene service task interface is sufficient for disclosing "a floating AI function entry interface or a scene service task interface." And, since Lee does not need to disclose the AI function entry interface to cover every element of this claim, Lee likewise does not need to disclose any further limitations of an element that is not required by the claim. See MPEP § 2111.04, subsection I ("Claim scope is not limited by claim language that suggests or makes optional but does not require steps to be performed, or by claim language that does not limit a claim to a particular structure."). Additionally, for at least the method claims of this application, the adjustable size or transparency limitation is doubly optional, because it is recited as a contingent limitation with an unmet condition precedent: it only says what could be done if the user provided an adjustment, rather than affirmatively requiring the user to perform the adjustment. See MPEP § 2111.04, subsection II ("The broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent are not met.").
Nevertheless, for the sake of compact prosecution, this rejection will now proceed as though the AI function entry interface were recited as a required element, and explain why this element would have been obvious even if it were recited non-optionally. In particular, Brown teaches a method (FIG. 6) performed by a computer system 10, comprising:

receiving a first input of a user on the at least one non-navigation button;

At block 72, the computer system 10 makes "a determination as to whether or not an initiating event is detected." Brown ¶ 79. "[A]n initiating event may include, but is not limited to, a user directing a cursor over a resource sensitive region or a user defined event occurring" within an "icon, graphic, window and other displayable object." Brown ¶ 41. "In addition, a displayable object may have a resource sensitive region wherein a user is required to input a key entry, voice entry or other input to initiate the transparent display. A user defined event may include a particular input from the user." Brown ¶ 41.

displaying, in response to the first input, a floating artificial intelligence (AI) function entry interface or a scene service task interface that corresponds to the at least one non-navigation button;

At block 80, the computer system 10 outputs "resource aid" information in the display area in accordance with user settings obtained in the prior steps. Brown ¶ 80. The information "may incorporate help aids providing both static and dynamic text. For example, a help aid may be depicted in response to an initiating event in correlation with monitored resource information or independent of any monitored resource information. In addition, help aid contents may adjust according to the status of a particular monitored resource such that help instructions are tailored according to utilization." Brown ¶ 40. As shown in FIG. 4 (among others), the resource aid(s) are layered over the displayable objects (i.e., the windows and icons in the rest of the interface). See, e.g., Brown ¶ 43 ("In a typical graphical display, there is both a background and foreground. In the present invention, displayable objects displayed in the background and foreground may be adjusted in transparency. In addition, resource aids may be incorporated in both background and foreground"); Brown ¶ 46 ("the displayable object below is seen through a layer of the transparent resource aid"); Brown ¶ 45 ("other elements [are displayed] below the resource aid").

wherein the floating AI function entry interface presents first recommended information based on one or more display objects displayed on the first interface,

"Additionally, resource aids may incorporate help aids providing both static and dynamic text." Brown ¶ 40. "Help aids . . . contain[] information to aid a user in performing a task or understanding the function of an icon, window or other object." Brown ¶ 19.

and wherein a size or a transparency of the floating AI function entry interface can be adjusted by the user.

Regarding user-adjusted size, "a user may select to close, minimize, or enlarge a transparent resource indicator, such as transparent resource bubble 62, by moving cursor 43 over the graphical area and clicking or entering other input indicating the transparent resource indicator is to be adjusted." Brown ¶ 73. Regarding user-adjusted transparency, "block 76 illustrates adjusting the transparency of the graphical output format according to the user transparency preference settings." Brown ¶ 80. For example, FIG. 2 illustrates a window 80 where the user may define various settings 82-89 for the bubbles, see Brown ¶ 58, and those settings include "transparency selections 89" for setting the criteria and amount of transparency. Brown ¶ 64.
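The Brown limitations mapped above (a floating aid whose size and transparency the user may adjust) reduce to a small data model. The sketch below is a hypothetical illustration; the class and field names are illustrative and do not appear in Brown.

```python
from dataclasses import dataclass

@dataclass
class TransparentResourceAid:
    """Minimal model of a floating aid with user-adjustable size
    (cf. Brown's close/minimize/enlarge input, ¶ 73) and transparency
    (cf. Brown's transparency preference settings, ¶¶ 64, 80)."""
    width: int = 200
    height: int = 100
    transparency: float = 0.5  # 0.0 = opaque, 1.0 = fully transparent

    def enlarge(self, factor: float) -> None:
        # User input indicating the indicator is to be adjusted (¶ 73).
        self.width = int(self.width * factor)
        self.height = int(self.height * factor)

    def set_transparency(self, value: float) -> None:
        # Clamp to the valid range rather than rejecting the preference.
        self.transparency = min(1.0, max(0.0, value))
```

Both adjustments are driven by user input, which is why the claim's "can be adjusted by the user" language reads onto Brown's settings window.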
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to improve Lee's user interface by making Brown's resource aids one of the features that are accessible via Lee's navigation area icons. One would have been motivated to at least apply Brown's transparency technique to Lee because transparency would avoid "obscur[ing] the view of other graphics displayed." Brown ¶ 22.

Claim 2

Lee and Brown teach the method of claim 1, wherein the at least one non-navigation button comprises one button. As shown in FIG. 5, the navigation area is customizable such that it may further include one "optionally additional" button in addition to the "mandatory" navigation buttons mapped to the claimed navigation button. See Lee ¶ 117. This one additional button may be any one of the several mentioned in the rejection of claim 1 above. Also, it should be understood that while Lee happens to use the word "navigational" to refer to the "optionally additional" button, this is merely nomenclature, and several of the examples of the "optionally additional" button are non-navigational for the reasons given in the rejection of claim 1.

Claim 3

Claim 3 requires the method of claim 1, and further clarifies that there are two non-navigation buttons (one for each of the AI function entry interface and the scene service task interface) and that selecting each button causes its corresponding interface to be displayed. Lee likewise teaches that the mobile terminal allows the user to expose multiple ones of the optional buttons in the navigation region 720 (in addition to the cancel, home, and menu buttons, which are deemed "mandatory"). See Lee ¶¶ 135-137. Naturally, regardless of how many total buttons are in the navigation region 720, each selection of each button triggers its corresponding interface to display, as discussed in the rejection of claim 1, above. See Lee ¶¶ 126-131 and FIGS. 6A-6F.
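The Lee mapping applied to claims 1-3 (a navigation region holding mandatory navigation buttons plus optional non-navigation buttons, where each touch triggers the corresponding interface) can be summarized as a dispatch table. This is a hypothetical sketch; the button and action names are illustrative, not Lee's.

```python
# Mandatory navigation buttons (cf. Lee ¶ 112).
NAVIGATION_BUTTONS = {
    "cancel": "return_to_previous_interface",
    "home": "jump_to_home_interface",
    "menu": "open_menu_for_current_screen",
}

# Optional non-navigation buttons the user may expose (cf. Lee ¶¶ 126-130).
NON_NAVIGATION_BUTTONS = {
    "quick_voice": "enter_voice_input_mode",
    "volume": "show_volume_controls",
    "recent": "list_recently_activated_applications",
}

def on_button_touched(button: str) -> str:
    """Each touch on a navigation-region button triggers its
    corresponding interface, regardless of how many optional
    buttons the user has exposed in the region."""
    dispatch = {**NAVIGATION_BUTTONS, **NON_NAVIGATION_BUTTONS}
    if button not in dispatch:
        raise ValueError(f"unknown navigation-region button: {button}")
    return dispatch[button]
```

The table makes the claim-mapping point concrete: navigation and non-navigation buttons share one region and one touch-dispatch mechanism, differing only in the action each triggers.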
Claim 4

Lee and Brown teach the method of claim 3, wherein displaying, in response to the second input, the AI function entry interface comprises displaying, in response to the second input, the AI function entry interface on the first interface in a floating manner.

At block 80, the computer system 10 outputs “resource aid” information in the display area in accordance with user settings obtained in the prior steps. Brown ¶ 80. The information “may incorporate help aids providing both static and dynamic text. For example, a help aid may be depicted in response to an initiating event in correlation with monitored resource information or independent of any monitored resource information. In addition, help aid contents may adjust according to the status of a particular monitored resource such that help instructions are tailored according to utilization.” Brown ¶ 40. As shown in FIG. 4 (among others), the resource aid(s) are layered over the displayable objects (i.e., the windows and icons in the rest of the interface). See, e.g., Brown ¶ 43 (“In a typical graphical display, there is both a background and foreground. In the present invention, displayable objects displayed in the background and foreground may be adjusted in transparency. In addition, resource aids may be incorporated in both background and foreground”); Brown ¶ 46 (“the displayable object below is seen through a layer of the transparent resource aid”); Brown ¶ 45 (“other elements [are displayed] below the resource aid”).

Claim 5

Lee and Brown teach the method of claim 3, wherein displaying, in response to the third input, the scene service task interface comprises switching, in response to the third input, to display the scene service task interface on the first interface.

Quick voice example: “If the quick voice button 640 is touched, referring to FIG.
6C, the controller can enter a state capable of receiving an input of user's voice, i.e., a voice input mode,” Lee ¶ 126, displaying an extra voice-related menu above the navigation region. See Lee FIG. 6C.

Volume example: “If the volume button 670 is touched, referring to FIG. 6E, the controller can control a pair of volume adjust buttons (e.g., `+` button 672 and `-` button 674) to be displayed.” Lee ¶ 128.

Recent list example: “A recent button is configured to provide a list of recently activated applications. If the recent button 690 is touched, referring to FIG. 6G, the controller can control a recently activated application list 695 to be displayed.” Lee ¶ 130.

Indication list example: “The indication button may be provided to display an indication window for providing information on an occurrence of an event (e.g., a text message newly received by a mobile terminal, a newly received email, an occurrence of an absent call, etc.). If the indication button 620 is touched, referring to FIG. 6B, the controller can control an indication window to be displayed in order to indicate an event occurrence.” Lee ¶ 125.

Claim 6

Lee and Brown teach the method of claim 3, wherein the first interface is a first application interface that includes the first button,

As shown in FIG. 4(b), Lee teaches a mobile terminal configured to display an interface comprising a “navigation region” with at least one button for performing a function other than navigation. See Lee ¶ 120 and FIGS. 5 and 7.

receiving a first preset operation from the user on the first button;

Quick voice example: the controller determines “[i]f the quick voice button 640 is touched.” Lee ¶ 126. Integrated access example: the controller determines “[i]f the integrated access button is 680 touched.” Lee ¶ 129.

and displaying, in response to the first preset operation, the first recommended information on the first application interface.
Quick voice example: “If the quick voice button 640 is touched, referring to FIG. 6C, the controller can enter a state capable of receiving an input of user's voice, i.e., a voice input mode,” Lee ¶ 126, displaying an extra voice-related menu above the navigation region. See Lee FIG. 6C. Integrated access example: “If the integrated access button is 680 touched, referring to FIG. 6F, the controller can control a plurality of buttons 682, 684 and 686, which are related to a plurality of functions linked to the integrated access button, respectively, to be displayed.” Lee ¶ 129.

Lee does not explicitly use “AI” to determine information that is “based on the one or more display objects displayed on the first application interface, wherein each of the one or more display objects is at least one piece of text information, voice information, or image information.” Brown, however, teaches a method and system wherein displaying, in response to the second input, the AI function entry interface comprises:

determining, by AI, the first recommended information based on the one or more display objects displayed on the first application interface, wherein each of the one or more display objects is at least one piece of text information, voice information, or image information;

According to the Applicant, “determining, by AI, the first recommended information” includes “a semantic analysis based on the content presented” on the screen (Spec. ¶ 126), or even a simple “keyword extraction” of the same (Spec. ¶ 155). As shown in FIG. 2, the computer system 10 includes settings 86, 88 that determine which resource aids to display based on the underlying meaning of “an icon, window or other displayable object” that “is within the display area.” Brown ¶ 62. For example, “when network capacity is greater than 80%,” the computer system 10 displays a resource aid that concerns networking if the icon, window, or other displayable object in the display area is associated with the network.
Brown ¶ 62. As another example, “when a resource level rises above a maximum setting, display of a resource aid for that software component is initiated.” Brown ¶ 63.

receiving a first preset operation from the user

At block 72, the computer system 10 makes “a determination as to whether or not an initiating event is detected.” Brown ¶ 79. “[A]n initiating event may include, but is not limited to, a user directing a cursor over a resource sensitive region or a user defined event occurring” within an “icon, graphic, window and other displayable object.” Brown ¶ 41. “In addition, a displayable object may have a resource sensitive region wherein a user is required to input a key entry, voice entry or other input to initiate the transparent display. A user defined event may include a particular input from the user.” Brown ¶ 41.

and displaying, in response to the first preset operation, the first recommended information on the first application interface.

At block 80, the computer system 10 outputs “resource aid” information in the display area in accordance with user settings obtained in the prior steps. Brown ¶ 80.

Claim 7

Lee and Brown teach the method of claim 6, wherein displaying the first recommended information comprises: displaying the first recommended information in an input box of the first application interface; displaying the first recommended information on the first application interface in a floating manner; or modifying the first application interface to obtain a modified first application interface and displaying the first recommended information on the modified first application interface.

Regarding the second alternative limitation, Brown discloses the resource aid(s) are layered over the displayable objects (i.e., the windows and icons in the rest of the interface). See, e.g., Brown ¶ 43 (“In a typical graphical display, there is both a background and foreground.
In the present invention, displayable objects displayed in the background and foreground may be adjusted in transparency. In addition, resource aids may be incorporated in both background and foreground”); Brown ¶ 46 (“the displayable object below is seen through a layer of the transparent resource aid”); Brown ¶ 45 (“other elements [are displayed] below the resource aid”). Regarding the third alternative limitation, Brown further teaches “block 76 illustrates adjusting the transparency of the graphical output format according to the user transparency preference settings.” Brown ¶ 80. For example, FIG. 2 illustrates a window 80 where the user may define various settings 82–89 for the bubbles, see Brown ¶ 58, and those settings include “transparency selections 89” for setting the criteria and amount of transparency. Brown ¶ 64.

Claim 8

Lee and Brown teach the method of claim 6, wherein the first recommended information is at least one of a web address link, a text, a picture, or an emoticon.

“[R]esource aids may incorporate help aids providing both static and dynamic text.” Brown ¶ 40.

Claim 11

Lee and Brown teach the method of claim 1, wherein the AI function entry interface further comprises at least one of a voice search button, an image search button, a text search button, or a save function button.

Quick voice example: “If the quick voice button 640 is touched, referring to FIG. 6C, the controller can enter a state capable of receiving an input of user's voice, i.e., a voice input mode,” Lee ¶ 126, displaying an extra voice-related menu above the navigation region. See Lee FIG. 6C. Integrated access example: “If the integrated access button is 680 touched, referring to FIG. 6F, the controller can control a plurality of buttons 682, 684 and 686, which are related to a plurality of functions linked to the integrated access button, respectively, to be displayed.” Lee ¶ 129. “In the example shown in FIG.
6F, if the integrated access button 680 is touched, a quick memo button 684, a quick voice button 682 and a search button 686 are displayed.” Lee ¶ 129.

Claim 12

Lee and Brown teach the method of claim 1, wherein displaying the AI function entry interface comprises: performing, in response to the first input, a semantic analysis on content on the first interface to extract one or more keywords; and displaying information corresponding to the one or more keywords on the AI function entry interface.

As shown in FIG. 2, the computer system 10 includes settings 86, 88 that determine which resource aids to display based on the underlying meaning of “an icon, window or other displayable object” that “is within the display area.” Brown ¶ 62. For example, “when network capacity is greater than 80%,” the computer system 10 displays a resource aid that concerns networking if the icon, window, or other displayable object in the display area is associated with the network. Brown ¶ 62. As another example, “when a resource level rises above a maximum setting, display of a resource aid for that software component is initiated.” Brown ¶ 63.

Claim 13

Lee and Brown teach the method of claim 1, wherein a second application program and a third application program are based on a use habit of the user, wherein the second application program is different than the third application program,

“The memory unit 160 is generally used to store various types of data,” including “program instructions for applications operating on the mobile terminal 100.” Lee ¶ 84. “And, a recent use history or a cumulative use frequency of each data (e.g., use frequency for each phonebook, each message or each multimedia) can be stored in the memory unit 160.” Lee ¶ 84.
and wherein displaying the scene service task interface comprises: displaying a first shortcut of the second application program at a first preset position on the scene service task interface at a first time;

“If the recent button 690 is touched, referring to FIG. 6G, the controller can control a recently activated application list 695 to be displayed.” Lee ¶ 130. For example, the recently activated application list 695 includes a “Subway” application. Lee FIG. 6G.

receiving a third preset operation from the user on the first shortcut;

The controller detects “[i]f a prescribed application is selected from the recently activated application list.” Lee ¶ 130.

displaying, on the scene service task interface in response to the third preset operation, a second interface corresponding to the second application program;

“If a prescribed application is selected from the recently activated application list, the controller can activate the selected application.” Lee ¶ 130.

displaying a second shortcut of the third application program at the first preset position at a second time, wherein the first time is different from the second time;

Lee does not say that recent button 690 is “single use.” Thus, at any time, “[i]f the recent button 690 is touched, referring to FIG. 6G, the controller can control a recently activated application list 695 to be displayed,” Lee ¶ 130, because that is what it is programmed to do. For example, the recently activated application list 695 includes a “Music” application. Lee FIG. 6G.

receiving a fourth preset operation from the user on the second shortcut;

The controller detects “[i]f a prescribed application is selected from the recently activated application list.” Lee ¶ 130.

and displaying, on the scene service task interface in response to the fourth preset operation, a third interface corresponding to the third application program.
“If a prescribed application is selected from the recently activated application list, the controller can activate the selected application.” Lee ¶ 130.

Claims 15–18

Claims 15–18 recite a general-purpose electronic device programmed to perform the method of claims 1, 11, 3, and 4 (in that order). Lee’s mobile terminal, as improved via Brown’s suggestions, performs the same method as recited in those claims for the reasons given in their rejections above, and further teaches that the mobile terminal has the same components as the claimed electronic device. See Lee ¶ 47.

Claims 19 and 20

Claims 19 and 20 are directed to a subset of the components from the general-purpose electronic device of claims 15 and 16, including all of the program instructions stored thereon. Therefore, claims 19 and 20 are rejected over the same findings and rationale provided in the rejections of claims 15 and 16. Any additional components recited in claims 15 and 16 that are not recited in claims 19 and 20 simply fall within the open-ended scope of the claim. See MPEP § 2111.03.

II. Lee, Brown, and Cho teach claims 9 and 10.

Claim(s) 9 and 10 are rejected under 35 U.S.C. § 103 as being unpatentable over Lee and Brown as applied to claim 8 above, and further in view of U.S. Patent Application Publication No. 2016/0037311 A1 (“Cho”).

Claim 9

Lee and Brown teach the method of claim 8, but neither reference explicitly discloses a “web address link” as the first recommended information. Cho, however, teaches both claim 9 and several limitations that claim 9 incorporates by reference from its ancestor claims 8 and 6. To provide context to those reviewing this rejection, all of the relevant overlapping limitations taught by Cho will be discussed below. Cho teaches, among other things, a method (FIG.
3) comprising:

[from claim 6] recommending, by AI, the first recommended information based on the one or more display objects displayed on the first application interface, wherein each of the one or more display objects is at least one piece of text information, voice information, or image information,

“The electronic device 100 may extract a keyword from [a] message received from the device of another user 200 in operation S130 . . . by using the semantic analysis method and the statistical analysis method.” Cho ¶¶ 97–98. The message(s) received from the device of the other user and analyzed in S130 are also displayed in a user interface of the device. Cho ¶¶ 86–87.

[from claim 8] wherein the first recommended information is at least one of a web address link, a text, a picture, or an emoticon.

Regarding the web address link, the content that electronic device 100 recommends may include a link to an article related to the message. Cho ¶ 373. Regarding the text, picture, or emoticon, Cho further teaches that the content electronic device 100 obtains “may include a two-dimensional image, a three-dimensional image, a two-dimensional video, a three-dimensional video, a text reply formed of various languages, content of various fields, and content related to applications providing various services.” Cho ¶ 110.

[from claim 6] receiving a first preset operation from the user on the first button;

“Via a pop-up window 60, the electronic device 100 may receive a user input regarding whether to display the content obtained with respect to each keyword.” Cho ¶ 91.

[from claim 6] and displaying, in response to the first preset operation, the first recommended information on the first application interface;

“The electronic device 100 may display the obtained content, when the user touches or taps a first response button 61.” Cho ¶ 91.
[from claim 9] wherein the first recommended information is the web address link,

As mentioned above, the content that electronic device 100 recommends may include a link to an article related to the message. Cho ¶ 373.

and wherein after displaying the first recommended information, the method further comprises: receiving a second

Prosecution Timeline

Jul 12, 2022
Application Filed
Nov 03, 2023
Non-Final Rejection — §103, §112, §DP
Feb 06, 2024
Response Filed
May 01, 2024
Final Rejection — §103, §112, §DP
Aug 02, 2024
Response after Non-Final Action
Sep 16, 2024
Applicant Interview (Telephonic)
Sep 17, 2024
Response after Non-Final Action
Sep 30, 2024
Request for Continued Examination
Oct 15, 2024
Response after Non-Final Action
Sep 22, 2025
Non-Final Rejection — §103, §112, §DP
Dec 19, 2025
Response Filed
Apr 10, 2026
Final Rejection — §103, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598356
System and Method for Analyzing Videos
2y 5m to grant · Granted Apr 07, 2026
Patent 12596870
SYSTEM AND METHOD FOR FACT-CHECKING COMPLEX CLAIMS WITH PROGRAM-GUIDED REASONING
2y 5m to grant · Granted Apr 07, 2026
Patent 12589692
APPARATUS FOR DRIVER ASSISTANCE AND METHOD OF CONTROLLING THE SAME
2y 5m to grant · Granted Mar 31, 2026
Patent 12566533
METHOD, APPARATUS, AND ELECTRONIC DEVICE FOR GENERATING A REMOTE CONTROL APPLICATION
2y 5m to grant · Granted Mar 03, 2026
Patent 12568132
METHOD OF ADDING LANGUAGE INTERPRETER DEVICE TO VIDEO CALL
2y 5m to grant · Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
47%
Grant Probability
80%
With Interview (+32.5%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 500 resolved cases by this examiner. Grant probability derived from career allow rate.
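The projection figures above are consistent with a simple derivation from the examiner's career data. A minimal sketch of that arithmetic, assuming the interview lift is additive in percentage points (the page does not state its formula, and the function names here are illustrative, not part of the product):

```python
# Sketch: how the dashboard's 47% and 80% figures could be derived from the
# examiner's career data shown above. The additive-lift model is an assumption.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate = granted cases / resolved cases."""
    return granted / resolved

def with_interview(base_rate: float, lift_points: float) -> float:
    """Apply the interview lift, assumed additive in percentage points."""
    return base_rate + lift_points / 100

base = allow_rate(235, 500)           # 0.47 -> the displayed 47% grant probability
boosted = with_interview(base, 32.5)  # 0.795 -> rounds to the displayed 80%
```

Under this reading, 235/500 gives the 47% baseline exactly, and 47 + 32.5 = 79.5 points, which rounds to the 80% "With Interview" figure.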
