Prosecution Insights
Last updated: April 19, 2026
Application No. 18/964,184

ELECTRONIC DEVICE, METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR PROCESSING TEXT CONTAINED WITHIN TEXT INPUT PORTION OF USER INTERFACE

Status: Non-Final OA (§103)
Filed: Nov 29, 2024
Examiner: WEHOVZ, OSCAR
Art Unit: 2161
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 62% (Moderate)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 5m
Grant Probability With Interview: 91%

Examiner Intelligence

Career Allow Rate: 62% of resolved cases (63 granted / 101 resolved; +7.4% vs Tech Center average)
Interview Lift: +28.3% higher allowance rate for resolved cases with an interview (strong)
Typical Timeline: 2y 5m average prosecution; 17 applications currently pending
Career History: 118 total applications across all art units

Statute-Specific Performance

§101: 9.1% (-30.9% vs TC avg)
§103: 71.5% (+31.5% vs TC avg)
§102: 3.4% (-36.6% vs TC avg)
§112: 11.8% (-28.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 101 resolved cases
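As a quick sanity check, the headline figures above can be reproduced from the stated counts. Note that the Tech Center baseline and the no-interview allowance rate are not reported by the dashboard; they are inferred here from the displayed deltas and should be treated as estimates.

```python
# Reconstruct the headline examiner metrics from the counts shown above.
# The Tech Center baseline is inferred from the stated "+7.4% vs TC avg"
# delta (an assumption, not a dashboard-reported value).

granted, resolved = 63, 101

allow_rate = granted / resolved                # career allow rate
print(f"career allow rate: {allow_rate:.1%}")  # 62.4%, displayed as 62%

# Implied Tech Center average from the stated delta:
tc_avg = allow_rate - 0.074
print(f"implied TC average: {tc_avg:.1%}")     # ~55.0%

# The interview figures are mutually consistent: the 91% with-interview
# rate minus the +28.3% lift gives a no-interview rate close to the
# 62% career rate.
without_interview = 0.91 - 0.283
print(f"implied rate without interview: {without_interview:.1%}")  # 62.7%
```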

Office Action

§103
DETAILED ACTION

This action is responsive to application filed on November 29, 2024.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

As required by M.P.E.P. 609, the applicant’s submission of the Information Disclosure Statement dated November 29, 2024 is acknowledged by the examiner and the cited references have been considered in the examination of the claims now pending.

Specification

Applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details. The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided. The abstract of the disclosure is objected to because it exceeds 150 words in length. Correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. 
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-5, 7, 9, 14 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Jeong (US Patent Application Publication No. US 20190369825 A1), in view of Kim (US Patent Publication No. US 20160065648 A1).

Regarding claim 1, Jeong teaches an electronic device comprising: a display; memory comprising one or more storage media storing one or more computer programs; and one or more processors comprising processing circuitry, wherein the one or more computer programs include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the electronic device to: based on a recognition of media content stored in the electronic device, obtain information regarding the media content, (See Jeong [0005] “an electronic device comprises a memory, a display, and at least one processor, wherein the at least one processor is configured to… identify one or more images stored in the memory… recognize at least a portion of content included in a selected image among some of the one or more images, provide character information to the application based on the recognized at least the portion of the content as a portion of the user input through the input unit. 
[Thus, based on a recognition of media content stored in the electronic device, obtain information regarding the media content]”) identify an event providing the media content from a first software application to a second software application, (See Jeong [0006, 0101] “when executing the instructions display a user interface of an application [e.g. first application]… in response to identification of an input [e.g. event] performed on a text-input portion included in the user interface, identify one or more images related to an application [e.g. second application] among a plurality of applications stored in the electronic device… the text-input portion may be used to input text (or characters) for executing a predetermined function in the application… may be used to provide a retrieval function in the application… the retrieval function may be a function for retrieving at least one piece of data that is stored in the electronic device 101 and is related to the application or external data of the electronic device 101.” See also Jeong [0113] “According to certain embodiments, the application may be an application [e.g. first software application] distinct from another application [e.g. second software application] used to control a virtual keyboard. According to certain embodiments, the application may be an application that can interwork with the other application.” See also Jeong [0059] “The application 146 may include, for example, a home 251, dialer 253, short message service (SMS)/multimedia messaging service (MMS) 255, instant message (IM) 257, browser 259, camera 261, alarm 263, contact 265… the application 146 may further include an information exchanging application (not shown) that is capable of supporting information exchange between the electronic device 101 and the external electronic device. 
The information exchange application, for example, may include a notification relay application adapted to transfer designated information… corresponding to an occurrence of a specified event (e.g., receipt of an email) at another application (e.g., the email application 269) of the electronic device 101 to the external electronic device.”)

However, Kim teaches identify an event providing the media content from a first software application to a second software application in more detail. (See Kim abstract “identifying data of a first application, determining at least one second application in response to a user event and identifying an attribute of each of the at least one second application, and processing the data of the first application based on the attributes of the each of the at least one second application to execute a function related to at least one second application.” See also Kim [0063, 0066] “The data transferal control module 170 may transfer data of a first application to a second application in response to a user event [Thus, identify an event providing the media content from a first software application to a second software application], and may execute a function related to the second application using the transferred data… the data [e.g. 
media content] of the first application may include at least one of data displayed on the window of the first application, which is obtained through an input, various pieces of raw data of the first application, processed data, a link, a thumbnail image, a text obtained by an optical character recognition (OCR) or effective information.” See also Kim [0097] “the electronic device 101 may execute the function related to the second application using the data of the first application… For example, when the second application is a document editing application and the attribute of data of the second application, which is selected by the user event, is information a character font, the electronic device 101 may display a text included in the data of the first application on a document editing window of the document editing application in a character font size corresponding to the selected data.” Examiner notes that based on the Specification paragraph [0042] “the term "media content" used in this document may include data, digital code, text, sound, audio, image, graphics, text, video, or any other similar material.”, the interpretation of the term “media content” may include data.) It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Jeong; which identifies an input [e.g. event] portion within an application [e.g. first application] to identify data related to an application [e.g. second application] among a plurality of applications, where an application that can interwork/transfer information with other applications, to incorporate the teachings of Kim which determine at least one second application in response to the user event, identifying an attribute of each of the at least one second application and transferring data between applications. 
One would be motivated to do so to improve user experience [Kim 0011], thereby removing the need to manually re-enter data and significantly increasing convenience and efficiency.

Jeong further in view of Kim [hereinafter Jeong-Kim] additionally discloses identify attribute information of a text input portion in a user interface of the second software application, (See Jeong [0086, 0106] “the processor may be configured to identify other character information provided by the application through the input unit [e.g. in a user interface of the second software application] and store the other character information as at least a portion of attribute information [Thus, identify attribute information of a text input portion] of the selected image… the character information may be at least one keyword (or text) that can be used for the image-based retrieval service in order to retrieve other information.”) Kim also teaches identify attribute information of a text input portion in a user interface of the second software application. (See Kim [0096] “the electronic device 101 may determine data of the second application in response to the user event, and may identify the attribute of the determined data [e.g. text input portion] of the second application. For example, the electronic device 101 may receive a user event which shifts the first application window to data displayed on the second application window [Thus, in a user interface of the second software application]. The electronic device 101 may identify the attribute (e.g., information included in the data) of the data.” See also Kim [0117-0118] “the electronic device 101 may determine the function related to the second application in response to the user event and may identify the attribute of the determined function related to the second application. For example, the electronic device 101 may receive a user event which shifts the first application window to a search application icon. 
The electronic device 101 may execute a search engine in response to the user event and may identify information on an instruction of a performance of the search function… when the electronic device 101 executes the search engine in response to the user event and identifies the information on the instruction of the performance of the search function, the electronic device 101 may perform a search through a search word [e.g. text input portion] based on a text included in the data of the first application.”) obtain text indicating at least a portion of the information, based on the attribute information, and (See Jeong [0086, 0106] “the processor may be configured to identify other character information provided by the application through the input unit and store the other character information as at least a portion of attribute information of the selected image… the character information may be at least one keyword (or text) that can be used for the image-based retrieval service in order to retrieve other information [Thus, obtain text indicating at least a portion of the information, based on the attribute information].”) Kim also teaches obtain text indicating at least a portion of the information, based on the attribute information. 
(See Kim [0097] “when the second application is a document editing application and the attribute of data [Thus, based on the attribute information] of the second application, which is selected by the user event, is information a character font, the electronic device 101 may display a text included in the data of the first application on a document editing window of the document editing application in a character font size corresponding to the selected data [Thus, obtain text indicating at least a portion of the information, based on the attribute information].”) display, via the display, a text input portion including the text, with the media content, in the user interface of the second software application executed in response to the event. (See Kim [0096-0097] “the electronic device 101 may receive a user event which shifts the first application window to data displayed on the second application window… when the second application is a document editing application and the attribute of data of the second application, which is selected by the user event, is information a character font, the electronic device 101 may display a text included in the data of the first application on a document editing window of the document editing application [Thus, display, via the display, a text input portion including the text, with the media content, in the user interface of the second software application executed in response to the event]”)

Regarding claim 2, Jeong-Kim teaches all limitations and motivations of claim 1, wherein the attribute information includes data indicating that the text input portion is associated with a third software application different from the first software application and the second software application, and (See Kim [0056] “According to various embodiments of the present disclosure, the one or more application 134 may include a short message service (SMS)/multimedia message service (MMS) application, an e-mail application, a calendar 
application, an alarm application, a health care application [Thus, different applications]” See also Kim [0135] “The determining of at least one second application in response to the user event according to an embodiment of the present disclosure may include determining at least one second application [e.g. third software application different from the first software application and the second software application] among the executed applications, based on an execution state attribute of the executed applications” See also Kim abstract, [0117-0118] “identifying data of a first application, determining at least one second application in response to a user event and identifying an attribute of each of the at least one second application [e.g. third application different from the first software application and the second software application], and processing the data of the first application based on the attributes of the each of the at least one second application to execute a function related to at least one second application… For example, the electronic device 101 may receive a user event which shifts the first application window to a search application icon. The electronic device 101 may execute a search engine in response to the user event and may identify information on an instruction of a performance of the search function… the electronic device 101 may execute the function related to the second application [e.g. third application] using the data of the first application and based on the attribute of the function related to the selected second application [Thus, data indicating that the text input portion is associated with a third software application]. For example, when the electronic device 101 executes the search engine in response to the user event and identifies the information on the instruction of the performance of the search function, the electronic device 101 may perform a search through a search word [e.g. 
text input portion] based on a text included in the data of the first application.”) wherein the text is obtained by searching, using the at least a portion of the information, a database stored in a storage region allocated for the third software application identified based on the data. (See Jeong [0034-0037, 0055] “The processor 120 may execute, for example, software (e.g., a program 140) to control at least one other component (e.g., a hardware or software component) of the electronic device 101… The program 140 may be stored in the memory 130 as software, and may include, for example, an operating system (OS) 142, middleware 144, or an application 146… The middleware 144 may include… a database manager 211” See also Jeong [0057, 0068-0071] “The database manager 211, for example, may generate, search, or change a database to be used by the application 146… the electronic device 101 may include memory storing a virtual keyboard application 291 used by the processor 120, a plurality of applications 292 distinct from the virtual keyboard application 291, a database 293 [Thus, a database stored in a storage region]… The virtual keyboard application 291 may interwork with a recommended word database stored in the memory 130. The recommended word database may provide a predicted word (or text) [Thus, text is obtained by searching] when using the virtual keyboard application 291… the word may include text related to the image-based retrieval service described with reference to the drawings from FIG. 3A… the database 293 may be used to store resources for providing the image-based retrieval service through interworking between the virtual keyboard application 291 and each of the plurality of applications 292 [e.g. 
third application].”)

Regarding claim 4, Jeong-Kim teaches all claim limitations and motivations of claim 1, wherein the one or more computer programs further include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the electronic device to: identify, from among a plurality of visual objects, a visual object corresponding to at least another portion of the information; and (See Jeong [0103-0105] “the content may be configured in various formats. For example, the content may be configured with at least one character and/or at least one visual object... In operation 320, the processor 120 may acquire recognition information of at least a portion of the content included in the selected image… the processor 120 may extract at least one visual object from the selected image, identify at least one feature point from at least one extracted visual object, and generate the recognition information on the basis of the at least one feature point so as to acquire the recognition information… the information [e.g. another portion of the information] on the selected image transmitted by the processor 120 may include information on at least one visual object extracted from the selected image. [Thus, identify, from among a plurality of visual objects, a visual object corresponding to at least another portion of the information]”) display, via the display, in the user interface, with the media content and the text input portion that includes the text, the visual object. (See Jeong [0095, 0106-0107] “the at least one processor is configured to display a first thumbnail image for representing a first image among a plurality of images stored in the electronic device along with a first user interface… provide content [e.g. 
visual object] retrieved based at least on the first image within the first user interface… the processor 120 may acquire character information corresponding to the recognition information on the basis of at least the acquisition… character information may be at least one keyword (or text) that can be used for the image-based retrieval service in order to retrieve other information… the processor 120 may provide the character information to the application through the input unit as at least a portion of the user input. According to certain embodiments, the processor 120 may provide the character information to the application by inputting (or inserting) the character information into the character input portion included in the user interface of the application. [Thus, display, via the display, in the user interface, with the media content and the text input portion that includes the text, the visual object]”)

Regarding claim 5, Jeong-Kim teaches all claim limitations and motivations of claim 1, wherein the one or more computer programs further include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the electronic device to: display, via the display, in a user interface of the first software application, the media content, with an executable object for a function provided via a framework; based at least in part on an input on the executable object, identify the event; using the second software application executed in response to the event, identify the attribute information; and using the second software application, obtain the text. (See Kim [0094-0097] “FIG. 8 is a flowchart [e.g. framework] illustrating a method of executing a function related to an application [e.g. an executable object for a function provided via a framework] of an electronic device… the electronic device 101 may identify the data of the first application. 
For example, the electronic device 101 may execute the first application, and may generate various pieces of data in the executed first application… the electronic device 101 may receive a user event [e.g. based at least in part on an input on the executable object, identify the event] which shifts the first application window [Thus, display, via the display, in a user interface of the first software application, the media content] to data displayed on the second application window. The electronic device 101 may identify the attribute (e.g., information included in the data) of the data… the electronic device 101 may execute the function related to the second application using the data of the first application and based on the identified attribute of the data of the second application [Thus, using the second software application executed in response to the event, identify the attribute information]. For example, when the second application is a document editing application and the attribute of data of the second application, which is selected by the user event, is information a character font, the electronic device 101 may display a text included in the data of the first application on a document editing window of the document editing application in a character font size corresponding to the selected data. [Thus, using the second software application, obtain the text]” Examiner notes that “a framework” is a very broad term, and the Specification does not provide any specific definition. The plain meaning of a framework is a basic structure of ideas or steps that provide support to a process. Kim’s flowchart illustrating a method of executing a function related to an application is interpreted as a framework.) 
Regarding claim 7, Jeong-Kim teaches all limitations and motivations of claim 1, wherein the one or more computer programs further include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the electronic device to: display, via the display, in a user interface of the first software application, with the media content, another text; and in response to the event identified while displaying the another text with the media content, further based on at least a portion of the another text, obtain the text. (See Jeong [0095, 0103-0107] “the at least one processor is configured to display a first thumbnail image for representing a first image among a plurality of images stored in the electronic device along with a first user interface… provide content [e.g. another text] retrieved based at least on the first image within the first user interface… the content may be configured in various formats. For example, the content may be configured with at least one character [e.g. text] and/or at least one visual object… the processor 120 may acquire recognition information of at least a portion of the content included in the selected image… the processor 120 may acquire character information corresponding to the recognition information on the basis of at least the acquisition. According to certain embodiments, the character information may be at least one keyword (or text) that can be used for the image-based retrieval service in order to retrieve other information… the processor 120 may provide the character information [e.g. the text] to the application through the input unit as at least a portion of the user input [e.g. event]. According to certain embodiments, the processor 120 may provide the character information to the application by inputting (or inserting) the character information into the character input portion included in the user interface of the application. 
[Thus, displaying the another text with the media content, further based on at least a portion of the another text, obtain the text]”)

Regarding claim 9, Jeong-Kim teaches all limitations and motivations of claim 1, wherein the one or more computer programs further include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the electronic device to: identify that another information is at least partially associated with the information, the another information obtained via a third software application before the event is identified; and further based on the another information, obtain the text. (See Jeong [0130] “According to certain embodiments, the recognition information [e.g. another information partially associated with the information] may include data acquired by applying Optical Character Reader (OCR) [e.g. obtained via a third software application before the event] to text included in the image” See also Jeong [0105-0107] “the processor 120 may acquire recognition information of at least a portion of the content included in the selected image. The image can be selected according to a predetermined input… the processor 120 may acquire character information corresponding to the recognition information on the basis of at least the acquisition. According to certain embodiments, the character information may be at least one keyword (or text) [Thus, further based on the another information, obtain the text] that can be used for the image-based retrieval service in order to retrieve other information… the processor 120 may provide the character information to the application through the input unit as at least a portion of the user input. According to certain embodiments, the processor 120 may provide the character information to the application by inputting (or inserting) [e.g. 
event] the character information into the character input portion included in the user interface of the application.” See also Jeong [0137] “The image file 541 may include source information 542 of the image, scene information 543 of the image, location information 544 indicating the location of the electronic device 101 at the time at which the image was acquired [Thus, before the event is identified], OCR information 545 on the result generated by applying OCR to the image, category information 546 of the image, and relevant app information 547 of the image as well as information on the image. The source information 542, the scene information 543, the location information 544, the OCR information 545, the category information 546, and the relevant app information 547 may be included in metadata (or header information) within the image file 541.”)

Regarding claim 14, Jeong-Kim teaches all limitations and motivations of claim 1, wherein the one or more computer programs further include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the electronic device to: based on the recognition of the media content, obtain the information including data regarding at least one object included in a predetermined region from among objects in the media content; and further based on the data regarding the at least one object, obtain the text. (See Kim [0066] “the data generating unit 171 may generate various pieces of data [e.g. information including data] of the first application. For example, the data of the first application may include at least one of data displayed on the window [e.g. included in a predetermined region] of the first application, which is obtained through an input, various pieces of raw data of the first application, processed data, a link, a thumbnail image [e.g. 
at least one object from among objects in the media content], a text obtained by an optical character recognition (OCR) [Thus, based on the recognition of the media content] or effective information.” See also Kim [0074] “The control module according to an embodiment of the present disclosure may generate the data of the first application as a thumbnail image, a text obtained by an OCR [Thus, further based on the data regarding the at least one object, obtain the text]”)

Regarding claim 19, Jeong-Kim teaches all of the elements of claim 1 in system form. Therefore, the supporting rationale of the rejection to claim 1 applies equally as well to those elements of claim 19.

Regarding claim 20, Jeong-Kim teaches all of the elements of claim 1 in system form. Therefore, the supporting rationale of the rejection to claim 1 applies equally as well to those elements of claim 20.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Jeong-Kim, in view of Park (US Patent Publication No. US 20160350060 A1).

Regarding claim 3, Jeong-Kim teaches all limitations and motivations of claim 1. Jeong-Kim does not explicitly disclose the attribute information includes data indicating a maximum number of characters. However, Park teaches wherein the attribute information includes data indicating a maximum number of characters capable of being input in the text input portion, and wherein the text is obtained by identifying the at least a portion of the information based on the data. (See Park [0013, 0135] “The electronic device includes a first display having a first size, a processor, and a memory storing instructions thereon that when executed allow the processor to receive, from an external device that includes a second display having a second size, a request for obtaining a text input and supplementary information [e.g. 
attribute information] related to the text input, display, through the first display, a user interface for obtaining the text input, and in response to receiving input information in the user interface… wherein the supplementary information contains at least one of the type of keypad related to text input, the type of input item, keypad language, the security level, the requested text string length, the maximum enterable length [e.g. a maximum number of characters capable of being input in the text input portion], the non-enterable characters, the text that is pre-input into the input window, or information on an application [Thus, the text is obtained by identifying the at least a portion of the information based on the data].” See also Park [0189] “The maximum enterable length refers to the maximum number of characters that can be input into a single input item. The maximum enterable length may be determined according to the type of input item. For example, if the type of input item is a telephone number, the maximum enterable length may be 10 to 12 characters, and if the type of input item is a password, the maximum enterable length may be 6 to 14 characters.”) It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Jeong-Kim; which uses a database manager to search a database to obtain a predicted word, to incorporate the teachings of Park which obtains supplementary information related to the text input including a maximum enterable length for the type of data. One would be motivated to do so to ensure that data entered conforms to the expected format and limitations.

Claims 6, 11-12 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Jeong-Kim, in view of Paul (US Patent Publication No. US 20220334683 A1).

Regarding claim 6, Jeong-Kim teaches all limitations of claim 1. Jeong teaches that images are classified based on categories. 
(See Jeong [0138] “the processor 120 may classify the image file and a first image file of the image files as a first category, among a plurality of categories, and classify the image file and a second image file of the image files as a second category, among the plurality of categories.”) Jeong-Kim does not explicitly disclose display items respectively indicating categories of the media content. However, Paul teaches display, via the display, in a user interface of the first software application, items respectively indicating categories of the media content, with the media content identified from among a plurality of media contents stored in the electronic device; and in response to the event identified while displaying the items with the media content, further based on at least a portion of the categories, obtain the text. (See Paul [0452-0454, 0465-0469] “As illustrated in FIG. 12A, application control region 726 includes thumbnail media representations 712 that are displayed [e.g. display, via the display, in a user interface of the first software application] in a single row. Thumbnail media representations 712 [e.g. plurality of media contents stored in the electronic device] of FIG. 12A include thumbnail representations 1212 a-1212 b… computer system 600 detects rightward swipe input 1250 a in media viewer region 724… in response to detecting rightward swipe input 1250 a [Thus, in response to the event identified while displaying the items with the media content], computer system 600 displays enlarged representation 1224 b and ceases to display enlarged representation 1224 a in media viewer region 724… a determination is made that enlarged representation 1224 b includes at least one detected feature (e.g., shirt 1232, dandelion 1234, book 1236, dog 1238, dog 1240, and lavender plant 1242) [e.g. 
items respectively indicating categories of the media content] that belongs to one or more of a set of predetermined categories of features (e.g., and/or is one of a predetermined types of features)… At FIG. 12E, computer system 600 detects tap input 1250 e on feature indicator 1260 a [Thus, further based on at least a portion of the categories]… As illustrated in FIG. 12F, in response to detecting tap input 1250 e, computer system 600 displays feature card 1270… Feature card 1270 includes exit control 1266, feature image 1270 a, feature identifier 1270 b, feature information 1270 c, and feature information 1270 d… Feature identifier 1270 b includes a description of the feature (“Lavender Plant”). Feature information 1270 c includes information concerning the feature (“PLANT GENIUS”) and, in some embodiments, denotes the category of the feature (e.g., lavender plant 1242) that corresponds to feature card 1270. Feature information 1270 d includes additional information concerning the feature.”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jeong-Kim, which classifies images using categories, to incorporate the teachings of Paul of determining and displaying detected features in media corresponding to predetermined categories. One would be motivated to do so to improve content searchability, leading to more relevant results.

Regarding claim 11, Jeong-Kim teaches all limitations of claim 1. Jeong teaches that images are classified based on categories. (See Jeong [0138] “the processor 120 may classify the image file and a first image file of the image files as a first category, among a plurality of categories, and classify the image file and a second image file of the image files as a second category, among the plurality of categories.”) Jeong-Kim does not explicitly disclose display items respectively indicating categories of the media content.
However, Paul teaches display, via the display, in a user interface of the first software application, items respectively indicating categories of a plurality of media contents stored in the electronic device and at least a portion of the plurality of media contents; identify the event while the media content, identified from among the plurality of media contents based on a user input regarding at least one item from among the items, is displayed in the user interface of the first software application; and based on the event, obtain the text further based on at least one category indicated by the at least one item. (See Paul [0452-0454, 0465-0469] “As illustrated in FIG. 12A, application control region 726 includes thumbnail media representations 712 that are displayed [e.g. display, via the display, in a user interface of the first software application] in a single row. Thumbnail media representations 712 [e.g. plurality of media contents stored in the electronic device] of FIG. 12A include thumbnail representations 1212 a-1212 b… computer system 600 detects rightward swipe input 1250 a in media viewer region 724 [Thus, identify the event while the media content, identified from among the plurality of media contents based on a user input regarding at least one item from among the items, is displayed in the user interface of the first software application]… in response to detecting rightward swipe input 1250 a [Thus, based on the event], computer system 600 displays enlarged representation 1224 b and ceases to display enlarged representation 1224 a in media viewer region 724… a determination is made that enlarged representation 1224 b includes at least one detected feature (e.g., shirt 1232, dandelion 1234, book 1236, dog 1238, dog 1240, and lavender plant 1242) [e.g. 
items respectively indicating categories of a plurality of media contents] that belongs to one or more of a set of predetermined categories of features (e.g., and/or is one of a predetermined types of features)… At FIG. 12E, computer system 600 detects tap input 1250 e on feature indicator 1260 a [Thus, further based on at least one category indicated by the at least one item]… As illustrated in FIG. 12F, in response to detecting tap input 1250 e, computer system 600 displays feature card 1270… Feature card 1270 includes exit control 1266, feature image 1270 a, feature identifier 1270 b, feature information 1270 c, and feature information 1270 d… Feature identifier 1270 b includes a description of the feature (“Lavender Plant”). Feature information 1270 c includes information concerning the feature (“PLANT GENIUS”) and, in some embodiments, denotes the category of the feature (e.g., lavender plant 1242) that corresponds to feature card 1270. Feature information 1270 d includes additional information concerning the feature [Thus, obtain the text].”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jeong-Kim, which classifies images using categories, to incorporate the teachings of Paul of determining and displaying detected features in media corresponding to predetermined categories. One would be motivated to do so to improve content searchability, leading to more relevant results.

Regarding claim 12, Jeong-Kim, further in view of Paul, teaches all limitations and motivations of claim 11, wherein the at least one item selected by the user input is visually highlighted relative to remaining items from among the items. (See Kim Fig.
14A, [0113] “For example, when the display 150 is a touch screen, the user may drag the memo write windows 1410 a and 1410 b to the random electronic device information 1421 a and 1421 b of the bluetooth information window 1420 a or the Wi-Fi information window 1420 b.” [media_image1.png] Thus, at least one item selected by the user input is visually highlighted relative to remaining items from among the items.)

Regarding claim 15, Jeong-Kim teaches all limitations and motivations of claim 1. Jeong teaches that images are classified based on categories. (See Jeong [0138] “the processor 120 may classify the image file and a first image file of the image files as a first category, among a plurality of categories, and classify the image file and a second image file of the image files as a second category, among the plurality of categories.”)

Jeong-Kim does not explicitly disclose displaying the media content included in a classification from among classifications used in the first software application. However, Paul teaches while displaying, in a user interface of the first software application, the media content included in a classification from among classifications used in the first software application; and further based on a name of the classification including the media content, obtain the text. (See Paul [0452-0454, 0465-0469] “As illustrated in FIG. 12A, application control region 726 includes thumbnail media representations 712 that are displayed [e.g. while displaying, in a user interface of the first software application] in a single row. Thumbnail media representations 712 [e.g. plurality of media contents stored in the electronic device] of FIG.
12A include thumbnail representations 1212 a-1212 b… computer system 600 detects rightward swipe input 1250 a in media viewer region 724… in response to detecting rightward swipe input 1250 a, computer system 600 displays enlarged representation 1224 b and ceases to display enlarged representation 1224 a in media viewer region 724… a determination is made that enlarged representation 1224 b includes at least one detected feature (e.g., shirt 1232, dandelion 1234, book 1236, dog 1238, dog 1240, and lavender plant 1242) [e.g. the media content included in a classification from among classifications used in the first software application] that belongs to one or more of a set of predetermined categories [e.g. classifications] of features (e.g., and/or is one of a predetermined types of features)… At FIG. 12E, computer system 600 detects tap input 1250 e on feature indicator 1260 a [Thus, further based on a name of the classification including the media content]… As illustrated in FIG. 12F, in response to detecting tap input 1250 e, computer system 600 displays feature card 1270… Feature card 1270 includes exit control 1266, feature image 1270 a, feature identifier 1270 b, feature information 1270 c, and feature information 1270 d… Feature identifier 1270 b includes a description of the feature (“Lavender Plant”). Feature information 1270 c includes information concerning the feature (“PLANT GENIUS”) and, in some embodiments, denotes the category of the feature (e.g., lavender plant 1242) that corresponds to feature card 1270. Feature information 1270 d includes additional information concerning the feature [Thus, obtain the text].”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jeong-Kim, which classifies images using categories, to incorporate the teachings of Paul of determining and displaying detected features in media corresponding to predetermined categories.
One would be motivated to do so to improve content searchability, leading to more relevant results.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Jeong-Kim, in view of Kim (US Patent Publication No. US 20160350060 A1 – hereinafter Kwang).

Regarding claim 8, Jeong-Kim teaches all limitations of claim 1, wherein the one or more computer programs further include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the electronic device to: while displaying the media content in a user interface of the first software application, identify the event; and in response to the event, further based on a name of a folder including an executable object for executing the first software application used for displaying the media content, obtain the text. (See Kim [0069] “An electronic device according to an embodiment of the present disclosure may include an input unit that receives a user event, a display that outputs a screen, and a control module that identifies data of a first application” See also Kim [0066-0068] “the data of the first application may include at least one of data displayed on the window of the first application… the function executing unit 173 may select one piece of data corresponding to the attribute of the second application among various pieces of data of the first application” See also Kim [0091] “The electronic device 101 may execute a browser application [e.g. first software application] in a state in which the message application is executed, and for example, the electronic device 101 may overlay a browser window 610 a with the message write window 620 to display the browser window 610 a [Thus, while displaying media content]. A user may perform websurfing through the browser window 610 a. As shown in FIG. 6C, the user may shift the browser window 610 a to the input window 621 of the message write window 620.
For example, when the display 150 is a touch screen, the user may drag [e.g. event] the browser window 610 a to the input window 621 of the message write window 620. For example, while a drag input is provided, the electronic device 101 may display the browser window 610 b [Thus, further based on executing the first software application] in a predetermined minimized size, which is shifted correspondingly to the drag input. Referring to FIG. 6C, the electronic device 101 may identify that the input window 621 is an attribute capable of receiving a link object, and may display a URL address of a corresponding browser window in the input window 621 as a link object 610 c [Thus, in response to the event, obtain the text].” See also Kim [0122] “As shown in FIG. 17B, the user may shift the memo write window 1710 b to a random icon, e.g., a telephone application icon 1720, among application icons [Thus, including an executable object for executing software application (e.g. the first software application)] displayed on a home screen.”)

[media_image2.png] [media_image3.png]

Jeong-Kim does not explicitly disclose a name of a folder including an executable object for executing the first software application. However, Kwang teaches based on a name of a folder including an executable object for executing the first software application.
(See Kwang abstract “The method for running an application in an electronic device includes displaying one application icon of one or more applications contained in a folder, in an icon of the folder, detecting a gesture to the folder icon, and running or changing the application displayed in the folder icon according to the gesture to the folder icon.” See also Kwang [0041-0042, 0048, 0059] “When the applications in the folder are displayed, the touch detection program 115 can detect the touch on a folder name over a certain time… when the tap on the folder is detected, the folder management program 116 can open the corresponding folder and display the applications in the folder. For example, when the tap on the system folder is detected, the folder management program 116 can display the phone, address book, and text message applications [e.g. first software application] in the system folder [Thus, based on a name of a folder including an executable object for executing the first software application used for displaying the media content]. Next, when the phone application is tapped [e.g. event] among the applications of the system folder, the folder management program 116 can run the phone application… in response to the touch input, the touch screen 130 provides the visual output to the user based on text, graphics, and video”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jeong-Kim, which discloses an electronic device capable of storing and executing multiple applications, to incorporate the teachings of Kwang of running an application contained in a folder. One would be motivated to do so to efficiently organize installed applications, preventing an overwhelming number of app icons from being scattered across multiple pages.
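Kwang's quoted folder behavior (display one of the contained application icons in the folder icon, open the folder on a tap, and run a tapped application) can be sketched roughly as follows. The class and method names are illustrative assumptions, not Kwang's implementation; the "System" folder contents follow Kwang's own example.

```python
# Illustrative sketch of Kwang's folder handling: a tap on the folder icon
# opens it and displays its applications; a tap on a contained application
# runs it. Names are hypothetical; the folder contents follow Kwang's
# example (phone, address book, text message).

class FolderIcon:
    def __init__(self, name, apps):
        self.name = name        # folder name shown with the icon
        self.apps = list(apps)  # executable objects contained in the folder
        self.is_open = False

    def tap(self, target=None):
        """Handle a tap gesture on the folder icon or on a contained app."""
        if target is None:
            self.is_open = True
            return self.apps            # open folder, show its applications
        if self.is_open and target in self.apps:
            return f"running {target}"  # run the tapped application
        return None

system = FolderIcon("System", ["phone", "address book", "text message"])
shown = system.tap()          # open the folder
result = system.tap("phone")  # run the phone application
```

In this reading, the folder's name ("System") is the attribute the claim would draw on when obtaining text for the input portion.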
Allowable Subject Matter

Claims 10, 13 and 16-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base and any intervening claims. After sufficient search and analysis, the Examiner concluded that the claimed invention has been recited in such a manner that claims 10, 13 and 16-18 are not taught by any prior reference found through search.

The primary reason for allowance of the claims in this case is the inclusion of the limitations “identify that another information is at least partially associated with the information, the another information obtained via a third software application before the event is identified; display, via the display, in the user interface of the second software application executed in response to the event, with the media content, items respectively indicating keywords identified based on the text input portion including the text, and the another information; receive a user input regarding at least one item from among the items; and in response to the user input, based on at least one keyword indicated by the at least one item, change at least a portion of the text displayed in the user interface.”, “obtain the information by identifying categories of objects in the media content through the recognition of the media content; identify, from among the categories, a category including a largest number of objects; and further based on the identified category, obtain the text.”, “identify the event providing the media content and another media content from the first software application to the second software application; identify first text for the media content based on the information; identify second text for the other media content, based on other information obtained based on recognition of the other media content; identify an upper category including a category including a word in the first text and a category including a word in
the second text; obtain the text including at least a portion of the information and at least a portion of the other information, further based on the upper category; and display the text input portion including the text, together with the media content and the other media content, through the display, within the user interface of the second software application.”, “display items respectively indicating the text input portion including the text and keywords identified based on the information, through the display, in the user interface of the second software application; display at least one item indicating at least one keyword indicating a changed portion of the text through the display together with the items, in response to a second user input for changing a portion of the text; and display other text within the text input portion through the display, based on the at least one item and a second user input for selecting at least a portion of the items.”, and “receive a first user input for selecting a word of the words in the text displayed in the user interface of the second software application; display items respectively indicating a category of the word, an upper category of the category, and a lower category of the category through the display, within the user interface of the second software application; and display the text in which the word is changed into at least another word, through the display within the text input portion, in response to a second user input for at least one of the items.” which are not found in the prior art of record.

Incorporating the allowable subject matter into the independent claims would put the claims in condition for allowance.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to OSCAR WEHOVZ whose telephone number is (571) 272-3362. The examiner can normally be reached 8:00am - 5:00pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, APU M MOFIZ, can be reached at (571) 272-4080. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/OSCAR WEHOVZ/
Examiner, Art Unit 2161

Prosecution Timeline

Nov 29, 2024
Application Filed
Jan 28, 2026
Non-Final Rejection — §103
Mar 02, 2026
Interview Requested
Mar 31, 2026
Examiner Interview Summary
Mar 31, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602394
RESOURCE EFFICIENT FULL BOOTSTRAP
2y 5m to grant Granted Apr 14, 2026
Patent 12602442
MEDIA CONTENT PROCESSING METHOD AND APPARATUS, STORAGE MEDIUM, AND ELECTRONIC DEVICE
2y 5m to grant Granted Apr 14, 2026
Patent 12596759
INSIGHTS SERVICE FOR SEARCH ENGINE UTILIZATION
2y 5m to grant Granted Apr 07, 2026
Patent 12591629
QUESTION ANSWERING USING ENTITY REFERENCES IN UNSTRUCTURED DATA
2y 5m to grant Granted Mar 31, 2026
Patent 12566819
SYSTEMS AND METHODS FOR CLUSTERING ALGORITHMS FOR DATA ANALYSIS
2y 5m to grant Granted Mar 03, 2026
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
62%
Grant Probability
91%
With Interview (+28.3%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 101 resolved cases by this examiner. Grant probability derived from career allow rate.
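The 91% with-interview projection appears to be the career allow rate (63 granted of 101 resolved) plus the +28.3-point interview lift, treated as additive in percentage points. This is an inference from the displayed figures, not a documented formula:

```python
# Sketch of how the projection figures appear to fit together (an
# assumption inferred from the displayed numbers, not a documented formula).
career_rate = 63 / 101 * 100          # ≈ 62.4%, shown as 62%
with_interview = career_rate + 28.3   # ≈ 90.7%, shown as 91%
```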
