Prosecution Insights
Last updated: April 19, 2026
Application No. 18/369,992

APPARATUS AND METHOD FOR PROVIDING CONTENT SEARCH USING KEYPAD IN ELECTRONIC DEVICE

Status: Non-Final Office Action (§ 103)
Filed: Sep 19, 2023
Examiner: Blaufeld, Justin R.
Art Unit: 2151
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 4 (Non-Final)

Grant Probability: 47% (Moderate); 80% with interview
Projected OA Rounds: 4–5
Projected Time to Grant: 3y 5m

Examiner Intelligence

Career Allow Rate: 47% (235 granted / 500 resolved); -8.0% vs TC average
Interview Lift: +32.5% (strong) — allowance rate among resolved cases with vs. without an interview
Typical Timeline: 3y 5m average prosecution; 66 currently pending
Career History: 566 total applications across all art units
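The headline figures above are simple ratios of the reported counts; a minimal Python sketch (only the 47%, the -8.0% delta, and the +32.5% lift appear in the dashboard — the TC-average value and the per-group interview rates below are assumptions chosen to reproduce those figures):

```python
# Career allow rate from the reported counts: 235 granted of 500 resolved.
granted, resolved = 235, 500
allow_rate = round(100 * granted / resolved, 1)   # 47.0

# Delta vs. the Tech Center average. The TC average itself is not shown;
# 55.0 is the value implied by the reported "-8.0% vs TC avg".
tc_avg = 55.0                                     # assumption
delta_vs_tc = round(allow_rate - tc_avg, 1)       # -8.0

# Interview lift, read as a difference in allowance rates (percentage
# points) between resolved cases with and without an examiner interview.
def interview_lift(rate_with, rate_without):
    return round(rate_with - rate_without, 1)

# Hypothetical per-group rates chosen only to reproduce the +32.5 figure.
lift = interview_lift(72.5, 40.0)                 # 32.5
```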

Statute-Specific Performance

§101: 9.0% (-31.0% vs TC avg)
§103: 40.7% (+0.7% vs TC avg)
§102: 24.6% (-15.4% vs TC avg)
§112: 20.1% (-19.9% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 500 resolved cases
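Each "vs TC avg" figure is the examiner's per-statute rate minus the Tech Center average estimate (the black line). Notably, all four reported deltas are consistent with a single estimate of 40.0%; a quick Python check (the 40.0 value is inferred from the deltas, not stated in the panel):

```python
# Examiner's per-statute rates (%) and the deltas the panel reports
# against the Tech Center average estimate.
rates = {"101": 9.0, "103": 40.7, "102": 24.6, "112": 20.1}
reported = {"101": -31.0, "103": 0.7, "102": -15.4, "112": -19.9}

TC_AVG_ESTIMATE = 40.0  # inferred: each reported delta equals rate - 40.0

computed = {s: round(r - TC_AVG_ESTIMATE, 1) for s, r in rates.items()}
assert computed == reported
```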

Office Action

§103
Detailed Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination under 37 C.F.R. § 1.114

A request for continued examination under 37 C.F.R. § 1.114, including the fee set forth in 37 C.F.R. § 1.17(e), was filed in this application after allowance or after an Office action under Ex parte Quayle, 25 USPQ 74, 453 O.G. 213 (Comm'r Pat. 1935). Since this application is eligible for continued examination under 37 C.F.R. § 1.114, and the fee set forth in 37 C.F.R. § 1.17(e) has been timely paid, prosecution in this application has been reopened pursuant to 37 C.F.R. § 1.114. Applicant's submission filed on December 9, 2025 has been entered and considered.

Response to Amendment

This Non-Final Office action is responsive to the Request for Continued Examination filed on December 9, 2025 (hereafter "Response"), which directs reconsideration of the June 2, 2025 amendments to the claims in light of the information disclosure statement filed with the December 9, 2025 Response.

Response to Arguments

Applicant's arguments with respect to the claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Objections

Claim 21 was added in the December 27, 2024 amendment, but the most recent listing of claims (June 2, 2025) never mentions it. Clarification is required; in the event the Applicant never cancelled claim 21, the Examiner provides a rejection of its last presented version below.

Claim Rejections – 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. § 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were effectively filed absent any evidence to the contrary. Applicant is advised of the obligation under 37 C.F.R. § 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned at the time a later invention was effectively filed in order for the examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention.

I. Huang and Mohsin teach claims 1–3, 5–9, 13, 15–17, and 19.

Claims 1–3, 5–9, 13, 15–17, and 19 are rejected under 35 U.S.C. § 103 as being unpatentable over U.S. Patent Application Publication No. 2017/0083524 A1 ("Huang") in view of U.S. Patent Application Publication No. 2017/0308591 A1 ("Mohsin").

Claim 1

Huang teaches:

An electronic device comprising: a display; a wireless communication circuit; a memory; and a processor configured to be operatively connected to the display, the wireless communication circuit, and the memory,

FIGS. 1A, 1B, and 10 illustrate a user device 102a (or 1000) that performs the functionality described in Huang's disclosure.
"Computing platform 1000 includes a bus 1004 or other communication mechanism for communicating information, which interconnects subsystems and devices, such as processor 1006, system memory 1010 (e.g., RAM, etc.), storage device 1008 (e.g., ROM, etc.), a communication interface 1012 (e.g., an Ethernet or wireless controller, a Bluetooth controller, etc.) to facilitate communications via a port on communication link 1014." Huang ¶ 103.

"Computing platform 1000 exchanges data representing inputs and outputs via input-and-output devices 1002, including, but not limited to, keyboards, mice, audio inputs (e.g., speech-to-text devices), user interfaces, displays, monitors, cursors, touch-sensitive displays, LCD or LED displays, and other I/O-related devices." Huang ¶ 103.

wherein the processor is configured to:

"According to some examples, computing platform 1000 performs specific operations by processor 1006 executing one or more sequences of one or more instructions stored in system memory 1010." Huang ¶ 104. Those instructions, and their effect, will be discussed together with each of the Applicant's claimed processor functions below.

detect a first input for calling a keypad while executing an application;

As shown in FIG. 8A, "[a] search query field 806" may receive an input from a user requesting "to perform a search on the media content management system 100 using text strings." Huang ¶ 88. The search query field 806 is both displayed and receives the user's input during the execution of "a native mobile application for texting, specifically the IMESSAGE platform through APPLE IOS." Huang ¶ 88.

display the keypad on an execution screen of the application;

In response to selecting the search query field, "[a] text keyboard appears to enable a searching user to enter or input a text string to search for content items in the media content management system 100." Huang ¶ 95 (referring to FIG. 9A).
detect a second input for a search while displaying the keypad;

"In this example, the word 'Happy' is entered into the text search query field 900, as shown in FIG. 9B." Huang ¶ 95.

perform, in response to the second input, the search

Once the search query is received from the user, it "may then be parsed 714 into one or more overlapping windows of content, where each window includes at least one word of the search query. Here, a word may include a portion of a word, such as 'ha' of the word 'happy.' A candidate set of media content items may be determined 716 from the media content items maintained 710 in the media content system based on at least one overlapping window of the one or more overlapping windows of content matching one or more expressive intent metadata content associations associated with the candidate set." Huang ¶ 82.

using a plurality of applications installed in the electronic device;

As shown in FIG. 1B, in order to perform the search, "a dynamic keyboard application 130 installed on the user device" directs a "search interface module 120" to search various data stores for the results. Huang ¶ 32.

classify each of a search results searched by the plurality of applications based on categories of contents in the search results; and

"A first candidate set of media content items may be determined 724 from the media content items in the media content system based, at least in part, on at least one word of the search query matching an expressive intent metadata content association associated with the one or more media content items included in the candidate set." Huang ¶ 84.

display the classified search results at least including a first result and a second result,

"The first candidate set of media content items are then provided 726 in the dynamic keyboard interface in response to the first search query. The dynamic keyboard interface may render the first candidate set of media content items on the mobile application on the mobile device concurrently and in animation." Huang ¶ 84.

wherein the first result at least includes a first object and a second object, and the second result at least includes a third object, and a fourth object,

By displaying renderings of the at least two media content items 104 (the claimed first and second results), Huang at least teaches displaying a first result with a first object (the rendering of the first result) and a second result with a second object (the rendering of the second result). See Huang ¶ 96 and FIGS. 8E and 9C (both illustrating several objects in the dynamic keyboard interface, each one of which corresponds to a media content item 104 that was returned from the search query). Additionally, depending on the broadest reasonable interpretation of the word "includes" in "the first result at least includes a first object and a second object, and the second result at least includes a third object, and a fourth object," it could be argued that Huang's results at least "include" these third and fourth objects, because each of Huang's media content items 104 include third and fourth objects that are accessible by navigating first and second menus 826 corresponding to each of the media content items 104. Once the control menu 826 for the first media content item is activated, a control menu will display several options, including an option for "sharing the selected content item in various messaging platforms." Huang ¶ 93. Likewise, the control menu 826 for the second media content item contains the same list of options, but applied to the second media content item.

perform, based on receiving an input for selecting of the first object, a first function related to the first result,

"As further illustrated in FIG. 9E, after selecting the option to paste the selected media content item 906, the shared selected media content item 914 is displayed within the text message field or IMESSAGE application." Huang ¶ 98.

perform, based on receiving an input for selecting of the second object, a second function related to the first result, the second function different from the first function,

Selecting any one of those menu options causes the device to perform the function so described using the currently selected content item. In this rejection, this function was described as "sharing the selected content item in various messaging platforms," Huang ¶ 93, which is different from pasting it specifically into the IMESSAGE application.

wherein performing the first function includes: generating shared information related to the first object; and inputting the shared information into an input area of the keypad in a format according to a content attribute of the first object,

The pasting option "may take advantage of the operating system of the user device 102, in one embodiment, such that the selected media content item 144 is not stored permanently onto the user device 102," so that it may be "pasted into the messaging user interface 142 of the messaging application 140." Huang ¶ 34.

and wherein performing the second function includes: executing an application according to a content attribute of the second object

Meanwhile, as mentioned above, the second function corresponds to "sharing the selected content item in various messaging platforms," Huang ¶ 93, which necessarily involves executing the various other messaging platforms, in order to share there.
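Huang's quoted parsing step (¶ 82) — splitting a query into overlapping windows of content whose "words" may be leading portions of words, then matching windows against expressive-intent metadata to determine a candidate set — can be illustrated with a short Python sketch. This is only an illustrative reading of the quoted passage, not Huang's actual implementation; every function and variable name here is invented:

```python
def overlapping_windows(query: str, min_prefix: int = 2) -> set[str]:
    """Produce overlapping windows from a query, where a 'word' may be a
    portion of a word, e.g. 'ha' of 'happy' (per the passage quoted above)."""
    words = query.lower().split()
    windows = set()
    for i in range(len(words)):                   # whole-word windows
        for j in range(i + 1, len(words) + 1):
            windows.add(" ".join(words[i:j]))
    for w in words:                               # partial-word windows
        for k in range(min_prefix, len(w)):
            windows.add(w[:k])
    return windows

def candidate_set(windows: set[str], metadata: dict[str, set[str]]) -> set[str]:
    """Content items whose metadata content associations match at least
    one window (the 'candidate set' determination, step 716)."""
    return {item for item, tags in metadata.items() if windows & tags}

# e.g. overlapping_windows("Happy") -> {'happy', 'ha', 'hap', 'happ'}
```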
Per the above discussion, the only difference between Huang and the claimed invention is that Huang's second and fourth objects are displayed on a separate page from Huang's first and third objects, and Huang's second function does not display a screen of the executed application without displaying the keypad, following selection of the second (or fourth) object.

Mohsin, however, teaches a computing device 110 with both these and several other heavily overlapping elements, including:

An electronic device comprising: a display; a wireless communication circuit; a memory; and a processor configured to be operatively connected to the display, the wireless communication circuit, and the memory,

"Computing device 110 includes a presence-sensitive display (PSD) 112, a user interface (UI) module 120, and a keyboard module 122. Modules 120 and 122 may perform operations described using software, hardware, firmware, or a mixture of hardware, software, and firmware residing in and/or executing at computing device 110. For example, one or more processors of computing device 110 may execute instructions that are stored at a memory or other non-transitory storage medium of computing device 110 to perform the operations of modules 120 and 122." Mohsin ¶ 15.

wherein the processor is configured to:

"FIG. 5 is a flowchart illustrating example operations of a computing device that is configured to present a graphical keyboard with integrated search features, in accordance with one or more aspects of the present disclosure. The operations of FIG. 5 may be performed by one or more processors of a computing device, such as computing devices 110 of FIG. 1." Mohsin ¶ 130.

display the keypad on an execution screen of the application;

"FIG. 5 includes outputting, by computing device 110, for display (e.g., at PSD 112), a graphical keyboard 116B comprising a plurality of keys 118A and a suggestion region 118B (502)." Mohsin ¶ 131. Furthermore, as shown in FIG. 1, the graphical keyboard 116B is displayed simultaneously with an "output region 116A," which belongs to a messaging application executing on computing device 110. Mohsin ¶ 18.

detect a second input for a search while displaying the keypad;

Returning to FIG. 5, computing device 110 performs step 504 of determining, "based on a selection of . . . one or more keys from the plurality of keys 118A, a search query." Mohsin ¶ 132.

perform, in response to the second input, the search

"The technique of FIG. 5 may further include retrieving, by computing device 110, one or more search results determined based on the search query (506)." Mohsin ¶ 135.

display the classified search results at least including a first result and a second result,

"Further, the technique of FIG. 5 includes outputting, by computing device 110, in place of at least a portion of graphical keyboard 116B, a visual representation of a search result of the one or more search results (508). For example, keyboard module 122 may output an indication of a visual representation of the one or more search results that include one or more card-based user interface element, each card-based user interface element being associated with a respective search result of the one or more search results." Mohsin ¶ 136 (emphasis added to highlight that some embodiments output more than one result via more than one card-based user interface element, and thus a "first" and "second" result).

wherein the first result at least includes a first object and a second object,

As shown in FIG. 4B, each card-based user interface element 420 (e.g., 420C) "includes a first predetermined portion 422C and a second predetermined portion 422D." Mohsin ¶ 122. The claimed "first object" maps to the first predetermined portion 422C, while the claimed "second object" is broad enough to map to either the "second predetermined portion 422D," or to any single one of the icons 424 displayed therein. See Mohsin ¶¶ 126–127.
and the second result at least includes a third object, and a fourth object,

The above disclosure applies to all of the one or more search results mentioned above. So, just as card-based user interface element 420C for a first result displayed the first and second objects mentioned above, so too does another card-based user interface element for a second result display corresponding objects as the claimed third and fourth objects.

perform, based on receiving an input for selecting of the first object, a first function related to the first result,

FIG. 5 continues by "determining, by computing device 110, based on user input (e.g., at PSD 112), selection of a predetermined portion of the visual representation of the search result (510). For example, keyboard module 122 may receive, from UI module 120, an indication of user input (e.g., a touch event) selecting a first portion of the visual representation of the search result," and in response, "automatically, without further user input, inserting, by computing device 110, in a text edit region 116C displayed adjacent to graphical keyboard 116B, information related to the search result (512)." Mohsin ¶ 137.

perform, based on receiving an input for selecting of the second object, a second function related to the first result, the second function different from the first function,

On the other hand, "[i]n response to receiving, from PSD 112, a touch event, UI module 120 or keyboard module 122 may determine selection of one of icons 424B–424D in the second predetermined portion 422D, and may transmit an indication of the selection of the one of icons 424B–424D to the associated application, which may perform the action associated with the selected one of icons 424B–424D." Mohsin ¶ 128. These actions are different from the action of inserting the search result into the text edit region.
See Mohsin ¶ 127 (describing the actions performed by each of the respective icons, none of which involve inserting the search result into the text edit region).

wherein performing the first function includes: generating shared information related to the first object; and inputting the shared information into an input area of the keypad in a format according to a content attribute of the first object,

"In response to determining the selection of first predetermined portion 422C, keyboard module 122 may be configured to automatically, without further user input, output an indication of information related to of the search result to UI module 120 and cause UI module 120 to insert the information related to the search result in text edit region 416C." Mohsin ¶ 123.

and wherein performing the second function includes: executing an application according to a content attribute of the second object; and displaying a screen of the executed application without displaying the keypad.

"Phone icon 424B is associated with an action of opening a phone application and calling Sandwich Place (i.e., the establishment associated with the search result). Navigation icon 424C is associated with an action of opening a navigation or maps application and retrieving directions to Sandwich Place (i.e., the establishment associated with the search result). Go-to icon 424D is associated with an action of opening a new application related to the search result displayed in card-based user interface element 420C, such as a search application or a website from which the information in the search result was retrieved." Mohsin ¶ 127.

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine all of Huang's menu options for each of Huang's search results into a single screen, as explicitly taught by Mohsin.
One would have been motivated to modify Huang according to Mohsin because switching back and forth between multiple interfaces "result[s] in an inelegant and inefficient user experience." Mohsin ¶ 2.

Claim 2

Huang and Mohsin teach the electronic device of claim 1, wherein the processor is configured to:

classify the search result based on attributes of the contents in the search result when classifying the search result;

"A search query including a selection of a collection of the multitude of collections may be received 734, where the selected collection is associated with an expressive intent metadata content association and where the search query is received 734 through the dynamic keyboard interface. A first candidate set of content items is determined 736 from the multitude of content items based on the expressive intent metadata content association associated with the one or more content items included in the candidate set." Huang ¶ 87.

and generate a plurality of groups of the contents based on the classified search result, each of the plurality of groups including contents grouped per a respective attribute of content.

"The first candidate set of content items is then provided 738 in the dynamic keyboard interface in response to the search query, where the dynamic keyboard interface renders the first candidate set of content items on the mobile application on the mobile device." Huang ¶ 87. In addition to the first candidate set from the first category, the mobile application also returns results in groups that are responsive to the search and fall into distinct other categories. For example, as shown in FIG. 8B, "tab interface 804 may include an icon that navigates to user generated collections 810, an icon that navigates to emotive curated collections 812, an icon that navigates to expressive curated collections 814, an icon that navigates to trending media content items 816, and an icon that navigates to audio/visual curated content items 818." Huang ¶ 89.

Claim 3

Huang and Mohsin teach the electronic device of claim 1, wherein the processor is configured to:

classify the search result based on attributes of the contents in the search result and attributes of applications used when performing the search, and

"A search query including a selection of a collection of the multitude of collections may be received 734, where the selected collection is associated with an expressive intent metadata content association and where the search query is received 734 through the dynamic keyboard interface. A first candidate set of content items is determined 736 from the multitude of content items based on the expressive intent metadata content association associated with the one or more content items included in the candidate set." Huang ¶ 87.

generate a plurality of groups of contents based on the classified search result, each of the plurality of groups including contents grouped per at least one of a respective attribute of content or a respective attribute of application.

"The first candidate set of content items is then provided 738 in the dynamic keyboard interface in response to the search query, where the dynamic keyboard interface renders the first candidate set of content items on the mobile application on the mobile device." Huang ¶ 87. In addition to the first candidate set from the first category, the mobile application also returns results in groups that are responsive to the search and fall into distinct other categories. For example, as shown in FIG. 8B, "tab interface 804 may include an icon that navigates to user generated collections 810, an icon that navigates to emotive curated collections 812, an icon that navigates to expressive curated collections 814, an icon that navigates to trending media content items 816, and an icon that navigates to audio/visual curated content items 818." Huang ¶ 89.

Claim 5

Huang and Mohsin teach the electronic device of claim 1, wherein the processor is configured to:

call the keypad based on the detecting of the first input while displaying the execution screen; and control the display to display the keypad in at least a partial area of the execution screen.

"FIG. 9 illustrates a text search query field 900. A text keyboard appears to enable a searching user to enter or input a text string to search for content items in the media content management system 100." Huang ¶ 95. FIG. 9A further shows that the text messaging application's screen is displayed in the top half of the display when the text keyboard is summoned.

Claim 6

Huang and Mohsin teach the electronic device of claim 1, wherein the processor is configured to:

receive a keyword and a search command as the second input while displaying the keypad through the display;

"In this example, the word 'Happy' is entered into the text search query field 900, as shown in FIG. 9B." Huang ¶ 95.

and control a plurality of applications to perform the search based on the keyword.

The broadest reasonable interpretation of controlling a plurality of applications to perform "the" search does not require each application to perform its own separate search, because the Applicant used the reference-back term "the" in order to invoke the singular smart search mentioned in claim 1. Accordingly, any disclosure of two or more applications facilitating a single search falls within the scope of this claim language.
In this case, the two applications that form the claimed "plurality of applications" performing the smart search include the "dynamic keyboard application 130 installed on the user device 102b," which receives the user's search input and provides it to "search interface module 120" (the second application of the plurality), which subsequently looks for matching content items in the content data stores. Huang ¶¶ 32–33.

Claim 7

Huang and Mohsin teach the electronic device of claim 6, wherein the processor is configured to:

acquire the search result from the at least one application among the plurality of applications;

"The first candidate set of content items is then provided 738 in the dynamic keyboard interface in response to the search query, where the dynamic keyboard interface renders the first candidate set of content items on the mobile application on the mobile device." Huang ¶ 87.

and provide the acquired search result to the keypad.

"FIG. 9C illustrates media content items 104 matching the search term 'Happy' in the media content management system 100, in one embodiment." Huang ¶ 96.

Claim 8

Huang and Mohsin teach the electronic device of claim 6, wherein the processor is configured to:

remove at least a part of a key map of the keypad and convert an area corresponding to the part of the key map into a view area for a content display.

As shown in FIG. 9C, "[s]earch results 904 are rendered in the dynamic keyboard interface concurrently and in animation," which replace the keyboard that was previously used to enter the "Happy" query in FIG. 9A. Huang ¶ 96.

Claim 9

Huang and Mohsin teach the electronic device of claim 1, wherein the processor is configured to:

classify the search result by at least one application into different categories.
"A search query including a selection of a collection of the multitude of collections may be received 734, where the selected collection is associated with an expressive intent metadata content association and where the search query is received 734 through the dynamic keyboard interface. A first candidate set of content items is determined 736 from the multitude of content items based on the expressive intent metadata content association associated with the one or more content items included in the candidate set." Huang ¶ 87.

Claim 13

Huang and Mohsin teach the electronic device of claim 1, wherein the processor is configured to:

provide an integrated search of content for each application, using a learning model trained using an artificial intelligence algorithm.

"A content associator module 108 may automatically generate one or more content associations for a media content item 104 in the media content management system 100 based on the attributes of the media content item 104. For example, machine learning techniques may be used by the content associator module 108 to determine relationships between media content items 104 and content associations stored in the content association store 118." Huang ¶ 40.

Claims 15–17

Claims 15–17 recite the same method that the electronic device of claims 1–3 performs as part of its normal operation. As such, these method claims are rejected according to the same findings and rationale as provided above for their corresponding device claims.

Claim 19

Claim 19 recites the same method that the electronic device of claim 8 performs as part of its normal operation. As such, this method claim is rejected according to the same findings and rationale as provided above for its corresponding device claim.

II. Huang, Mohsin, and Shapira teach claims 4, 14, 18, and 21.

Claims 4, 14, 18, and 21 are rejected under 35 U.S.C. § 103 as being unpatentable over Huang in view of Mohsin as applied to claims 3, 1, and 15 above, and further in view of U.S. Patent Application Publication No. 2014/0250147 A1 ("Shapira").

Claim 4

Huang teaches the electronic device of claim 3, but neither Huang nor Mohsin explicitly discloses a plurality of applications generating the content, let alone classifying the content based on the plurality of applications. Shapira, however, teaches a system that substantially overlaps Huang with nearly all of the elements of claim 1, and further teaches:

the content is generated by a plurality of applications, and the attributes of the content is classified according to the plurality of applications.

"The set of result objects 362 includes result objects 362 corresponding to the YELP® and OPENTABLE® applications." Shapira ¶ 133. Specifically, the result object 362a for YELP® corresponds to the claimed first information result, while the result object 362b for OPENTABLE® corresponds to the claimed second information result.

The rationale to support this conclusion "is that a method of enhancing a particular class of devices (methods, or products) has been made part of the ordinary capabilities of one skilled in the art based upon the teaching of such improvement in other situations," and "[o]ne of ordinary skill in the art would have been capable of applying this known method of enhancement to a 'base' device (method, or product) in the prior art and the results would have been predictable to one of ordinary skill in the art." MPEP § 2143 (subsection (I.)(C.)); see also KSR Int'l Co. v. Teleflex, Inc., 550 U.S. 398, 417 (2007) and Intel Corp. v. PACT XPP Schweiz AG, 61 F.4th 1373, 1380–81 (Fed. Cir. 2023).
In accordance with the guidance provided by MPEP § 2143 (subsection (I.)(C.)), the relevant factual findings that lead to this conclusion are supported by a preponderance of evidence for the following reasons: (1) The prior art contained a “base” device (method, or product) upon which the claimed invention can be seen as an “improvement.” The evidence for this finding is provided in the first half of this rejection, where each claim element is mapped to a respective element of Huang and Mohsin’s disclosures. The claimed invention can be seen as an “improvement” over Huang and Mohsin because those references are unable to obtain content from more than one source. (2) The prior art contained a “comparable” device (method, or product that is not the same as the base device) that has been improved in the same way as the claimed invention. The evidence for this finding is provided in the second half of this rejection, where each claim element is mapped to a respective element of Shapira’s disclosure. These mappings show that the Applicant’s “improvement” was already known via the Shapira reference prior to the effective filing date of the claimed invention. (3) One of ordinary skill in the art could have applied the known “improvement” technique in the same way to the “base” device (method, or product) and the results would have been predictable to one of ordinary skill in the art. The evidence for this finding is that nothing needs to be added or removed from either of the references in order to improve the base device. Accordingly, based on the above factual findings, the Office concludes that it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to improve Huang and Mohsin’s devices in the same way that Shapira improved its own computing device 200. Claim 14 Huang teaches the electronic device of claim 1, wherein the processor is configured to: display the keypad on the display; “FIG. 
9 illustrates a text search query field 900. A text keyboard appears to enable a searching user to enter or input a text string to search for content items in the media content management system 100.” Huang ¶ 95. “As described above, a text string may be parsed into words and partial words and a search router rules engine 206 may identify one or more content associations that match the search terms.” Huang ¶ 95.

perform an integrated search of content according to the keyword based on the predicted plurality of applications; “A search query processing screen 902 is provided in the dynamic keyboard interface to indicate to the user that the search is being processed.” Huang ¶ 95. In order to process the search, “[a] first candidate set of content items is determined 736 from the multitude of content items based on the expressive intent metadata content association associated with the one or more content items included in the candidate set.” Huang ¶ 87.

and provide the search result through the keypad. “FIG. 9C illustrates media content items 104 matching the search term ‘Happy’ in the media content management system 100.” Huang ¶ 96.

Huang does not explicitly disclose a plurality of applications for the content and, therefore, does not disclose predicting a plurality of different applications in which to run the search. Shapira, however, teaches a system that substantially overlaps Huang with nearly all of the elements of claim 1, and further teaches a system configured to: display the keypad on the display; “For example, the user may enter a search query 262 into a search bar 252 (e.g., a search box) of the GUI 250 using a touchscreen keypad.” Shapira ¶ 40.
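The partial-word matching that Huang ¶¶ 95 and 87 describe — splitting the typed string into words and "partial words," then selecting content items whose metadata associations match — can be sketched as follows. The function names, the minimum prefix length, and the tag representation are all my assumptions for illustration, not Huang's disclosure:

```python
# Hedged sketch: parse a query into words plus word prefixes, then build
# a candidate set of items whose metadata tags begin with any parsed term.
def parse_terms(query: str) -> set:
    """Return each word of the query plus its prefixes of length >= 2."""
    terms = set()
    for word in query.lower().split():
        for i in range(2, len(word) + 1):
            terms.add(word[:i])
    return terms

def candidate_set(query: str, content_items) -> list:
    """content_items: iterable of (item_id, [metadata tag, ...]) pairs."""
    terms = parse_terms(query)
    return [item_id for item_id, tags in content_items
            if any(tag.lower().startswith(term)
                   for tag in tags for term in terms)]

items = [("item_104a", ["happy"]), ("item_104b", ["sad"])]
print(candidate_set("Ha", items))  # ['item_104a']
```

The point of the sketch is only that a partial entry like "Ha" already selects the "happy" content item, which is the behavior the rejection attributes to Huang's search router.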
predict a plurality of applications for an integrated search; As part of the search, search application 214 may gather and transmit “additional query parameters 264 to the search server 300 along with the search query 262,” including “a list of native application installed on the remote computing device 200.” Shapira ¶ 41. These are provided together with the search query to guide the search into returning state links that are relevant to applications the user currently has installed.

The rationale to support this conclusion “is that a method of enhancing a particular class of devices (methods, or products) has been made part of the ordinary capabilities of one skilled in the art based upon the teaching of such improvement in other situations,” and “[o]ne of ordinary skill in the art would have been capable of applying this known method of enhancement to a ‘base’ device (method, or product) in the prior art and the results would have been predictable to one of ordinary skill in the art.” MPEP § 2143 (subsection (I.)(C.)); see also KSR Int’l Co. v. Teleflex, Inc., 550 U.S. 398, 417 (2007), and Intel Corp. v. PACT XPP Schweiz AG, 61 F.4th 1373, 1380–81 (Fed. Cir. 2023).

In accordance with the guidance provided by MPEP § 2143 (subsection (I.)(C.)), the relevant factual findings that lead to this conclusion are supported by a preponderance of evidence for the following reasons:

(1) The prior art contained a “base” device (method, or product) upon which the claimed invention can be seen as an “improvement.” The evidence for this finding is provided in the first half of this rejection, where each claim element is mapped to a respective element of Huang and Mohsin’s disclosures. The claimed invention can be seen as an “improvement” over Huang and Mohsin because those references are unable to obtain content from more than one source.
(2) The prior art contained a “comparable” device (method, or product that is not the same as the base device) that has been improved in the same way as the claimed invention. The evidence for this finding is provided in the second half of this rejection, where each claim element is mapped to a respective element of Shapira’s disclosure. These mappings show that the Applicant’s “improvement” was already known via the Shapira reference prior to the effective filing date of the claimed invention.

(3) One of ordinary skill in the art could have applied the known “improvement” technique in the same way to the “base” device (method, or product) and the results would have been predictable to one of ordinary skill in the art. The evidence for this finding is that nothing needs to be added or removed from either of the references in order to improve the base device.

Accordingly, based on the above factual findings, the Office concludes that it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to improve Huang and Mohsin’s devices in the same way that Shapira improved its own computing device 200.

Claim 18

Huang, as combined with Mohsin, teaches the method of claim 15. Huang further teaches: calling the keypad based on the detecting of the first input while displaying the execution screen; controlling the display of the electronic device to display the keypad in at least a partial area of the execution screen; “FIG. 9 illustrates a text search query field 900. A text keyboard appears to enable a searching user to enter or input a text string to search for content items in the media content management system 100.” Huang ¶ 95. FIG. 9A further shows that the text messaging application’s screen is displayed in the top half of the display, when the text keyboard is summoned.
receiving a keyword and a search command as the second input while displaying the keypad through the display; “In this example, the word ‘Happy’ is entered into the text search query field 900, as shown in FIG. 9B.” Huang ¶ 95.

performing the search based on the keyword; Once the search query is received from the user, it “may then be parsed 714 into one or more overlapping windows of content, where each window includes at least one word of the search query. Here, a word may include a portion of a word, such as ‘ha’ of the word ‘happy.’ A candidate set of media content items may be determined 716 from the media content items maintained 710 in the media content system based on at least one overlapping window of the one or more overlapping windows of content matching one or more expressive intent metadata content associations associated with the candidate set.” Huang ¶ 82.

and acquiring the search result from the at least one application among the plurality of applications. “A first candidate set of media content items may be determined 724 from the media content items in the media content system based, at least in part, on at least one word of the search query matching an expressive intent metadata content association associated with the one or more media content items included in the candidate set.” Huang ¶ 84.

Huang does not appear to explicitly disclose “a plurality” of applications generating the content, let alone classifying the content based on the plurality of applications. Shapira, however, teaches a method comprising: receiving a keyword and a search command as the second input while displaying the keypad through the display; “For example, the user may enter a search query 262 into a search bar 252 (e.g., a search box) of the GUI 250 using a touchscreen keypad.” Shapira ¶ 40.
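The "overlapping windows" parsing quoted from Huang ¶ 82 — windows of query words, where a "word" may also be a partial word such as "ha" of "happy" — can be sketched minimally as below. The window size and the prefix-based matching policy are assumptions made for illustration only:

```python
# Minimal sketch of overlapping-window parsing: split the query into
# overlapping windows of consecutive words, and treat a window as
# matching a metadata tag when any of its words is a prefix of the tag.
def overlapping_windows(query: str, size: int = 2) -> list:
    """Return overlapping windows of up to `size` consecutive words."""
    words = query.lower().split()
    return [tuple(words[i:i + size]) for i in range(len(words))]

def window_matches(window, tag: str) -> bool:
    """A window matches a tag if any of its words is a prefix of the tag."""
    return any(tag.lower().startswith(w) for w in window)

wins = overlapping_windows("so very happy today")
print(wins)
# [('so', 'very'), ('very', 'happy'), ('happy', 'today'), ('today',)]
print(any(window_matches(w, "Happy") for w in wins))  # True
```

Under this reading, a candidate set is simply the items whose expressive-intent tags are matched by at least one window of the parsed query.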
performing the search in a plurality of applications based on the keyword; As part of the search, search application 214 may gather and transmit “additional query parameters 264 to the search server 300 along with the search query 262,” including “a list of native application installed on the remote computing device 200.” Shapira ¶ 41. These are provided together with the search query to guide the search into returning state links that are relevant to applications the user currently has installed.

The rationale to support this conclusion “is that a method of enhancing a particular class of devices (methods, or products) has been made part of the ordinary capabilities of one skilled in the art based upon the teaching of such improvement in other situations,” and “[o]ne of ordinary skill in the art would have been capable of applying this known method of enhancement to a ‘base’ device (method, or product) in the prior art and the results would have been predictable to one of ordinary skill in the art.” MPEP § 2143 (subsection (I.)(C.)); see also KSR Int’l Co. v. Teleflex, Inc., 550 U.S. 398, 417 (2007), and Intel Corp. v. PACT XPP Schweiz AG, 61 F.4th 1373, 1380–81 (Fed. Cir. 2023).

In accordance with the guidance provided by MPEP § 2143 (subsection (I.)(C.)), the relevant factual findings that lead to this conclusion are supported by a preponderance of evidence for the following reasons:

(1) The prior art contained a “base” device (method, or product) upon which the claimed invention can be seen as an “improvement.” The evidence for this finding is provided in the first half of this rejection, where each claim element is mapped to a respective element of Huang and Mohsin’s disclosures. The claimed invention can be seen as an “improvement” over Huang and Mohsin because those references are unable to obtain content from more than one source.
(2) The prior art contained a “comparable” device (method, or product that is not the same as the base device) that has been improved in the same way as the claimed invention. The evidence for this finding is provided in the second half of this rejection, where each claim element is mapped to a respective element of Shapira’s disclosure. These mappings show that the Applicant’s “improvement” was already known via the Shapira reference prior to the effective filing date of the claimed invention.

(3) One of ordinary skill in the art could have applied the known “improvement” technique in the same way to the “base” device (method, or product) and the results would have been predictable to one of ordinary skill in the art. The evidence for this finding is that nothing needs to be added or removed from either of the references in order to improve the base device.

Accordingly, based on the above factual findings, the Office concludes that it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to improve Huang and Mohsin’s devices in the same way that Shapira improved its own computing device 200.

Claim 21

Huang and Mohsin teach the electronic device of claim 1, but not the additional elements of claim 21. Shapira, however, teaches: wherein performing, in response to the second input, the search using at least one application installed in the electronic device, further comprises selecting the at least one application from a plurality of search engines installed in the electronic device based on the second input. “The search server 300 can perform one or more types of searches to determine a set of third party applications based on the query parameters 262, 264.
For example, if the search server 300 receives the query ‘make a reservation at French Laundry at 7:00 PM,’ the search server 300 can identify that ‘French Laundry’ has an entity type of restaurant and process the rest of the query terms to determine that the user desires functionality that allows the user to make reservations. The search server 300 can then identify a set of third party applications that can be utilized to make restaurant reservations.” Shapira ¶ 51.

The rationale to support this conclusion “is that a method of enhancing a particular class of devices (methods, or products) has been made part of the ordinary capabilities of one skilled in the art based upon the teaching of such improvement in other situations,” and “[o]ne of ordinary skill in the art would have been capable of applying this known method of enhancement to a ‘base’ device (method, or product) in the prior art and the results would have been predictable to one of ordinary skill in the art.” MPEP § 2143 (subsection (I.)(C.)); see also KSR Int’l Co. v. Teleflex, Inc., 550 U.S. 398, 417 (2007), and Intel Corp. v. PACT XPP Schweiz AG, 61 F.4th 1373, 1380–81 (Fed. Cir. 2023).

In accordance with the guidance provided by MPEP § 2143 (subsection (I.)(C.)), the relevant factual findings that lead to this conclusion are supported by a preponderance of evidence for the following reasons:

(1) The prior art contained a “base” device (method, or product) upon which the claimed invention can be seen as an “improvement.” The evidence for this finding is provided in the first half of this rejection, where each claim element is mapped to a respective element of Huang and Mohsin’s disclosures. The claimed invention can be seen as an “improvement” over Huang and Mohsin because those references are unable to obtain content from more than one source.
(2) The prior art contained a “comparable” device (method, or product that is not the same as the base device) that has been improved in the same way as the claimed invention. The evidence for this finding is provided in the second half of this rejection, where each claim element is mapped to a respective element of Shapira’s disclosure. These mappings show that the Applicant’s “improvement” was already known via the Shapira reference prior to the effective filing date of the claimed invention.

(3) One of ordinary skill in the art could have applied the known “improvement” technique in the same way to the “base” device (method, or product) and the results would have been predictable to one of ordinary skill in the art. The evidence for this finding is that nothing needs to be added or removed from either of the references in order to improve the base device.

Accordingly, based on the above factual findings, the Office concludes that it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to improve Huang and Mohsin’s devices in the same way that Shapira improved its own computing device 200.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Justin R. Blaufeld, whose telephone number is (571) 272-4372. The examiner can normally be reached M-F 9:00am - 4:00pm ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James K Trujillo, can be reached at (571) 272-3677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Justin R. Blaufeld
Primary Examiner
Art Unit 2151

/Justin R. Blaufeld/
Primary Examiner, Art Unit 2151

Prosecution Timeline

Sep 19, 2023
Application Filed
May 03, 2024
Non-Final Rejection — §103
Jul 09, 2024
Applicant Interview (Telephonic)
Jul 12, 2024
Examiner Interview Summary
Aug 05, 2024
Response Filed
Oct 31, 2024
Final Rejection — §103
Dec 27, 2024
Request for Continued Examination
Jan 07, 2025
Response after Non-Final Action
Feb 04, 2025
Non-Final Rejection — §103
Jun 02, 2025
Response Filed
Dec 09, 2025
Request for Continued Examination
Dec 20, 2025
Response after Non-Final Action
Jan 14, 2026
Response after Non-Final Action
Jan 17, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598356
System and Method for Analyzing Videos
2y 5m to grant Granted Apr 07, 2026
Patent 12596870
SYSTEM AND METHOD FOR FACT-CHECKING COMPLEX CLAIMS WITH PROGRAM-GUIDED REASONING
2y 5m to grant Granted Apr 07, 2026
Patent 12589692
APPARATUS FOR DRIVER ASSISTANCE AND METHOD OF CONTROLLING THE SAME
2y 5m to grant Granted Mar 31, 2026
Patent 12566533
METHOD, APPARATUS, AND ELECTRONIC DEVICE FOR GENERATING A REMOTE CONTROL APPLICATION
2y 5m to grant Granted Mar 03, 2026
Patent 12568132
METHOD OF ADDING LANGUAGE INTERPRETER DEVICE TO VIDEO CALL
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

4-5
Expected OA Rounds
47%
Grant Probability
80%
With Interview (+32.5%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 500 resolved cases by this examiner. Grant probability derived from career allow rate.
