DETAILED ACTION
Status of the Application
This Final Office Action is in response to Application Serial No. 17/428,211. In response to the Examiner’s action mailed September 25, 2025, Applicant submitted arguments and amendments filed January 08, 2026. Claims 1 and 11 are amended. Claims 2, 4, 12, 14, 16, and 17 remain cancelled. Claims 1, 3, 5-11, 13, 15, and 18-19 are pending in this application and are rejected below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on September 25, 2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Response to Amendments
Claims 1, 3, 5-11, 13, 15, and 18-19 are pending in this application. Claims 1 and 11 are amended. Claims 2, 4, 12, 14, 16, and 17 are cancelled.
Regarding the 35 U.S.C. 101 rejection, the amendments to claims 1, 3, 5-11, 13, 15, and 18-19 are not persuasive. Claims 1, 3, 5-11, 13, 15, and 18-19 remain rejected under 35 U.S.C. 101.
Applicant is encouraged to request an interview.
Regarding the 35 U.S.C. 103 rejection, the amendments are persuasive. Examiner withdraws the 35 U.S.C. 103 rejection.
Response to Arguments
Applicant’s arguments filed August 11, 2025 have been fully considered. The Applicant’s 35 U.S.C. 103 arguments and amendments are persuasive. The Applicant’s 35 U.S.C. 101 arguments are considered, but they are not persuasive and/or moot in view of the revised rejections.
Claim Rejections under 35 U.S.C. 101
On pages 13-17 of the Applicant’s arguments, the Applicant traverses the 35 U.S.C. 101 rejection with arguments and amendments. On page 17 of the Applicant’s remarks, Applicant submits that the pending claims are patent eligible under 35 U.S.C. 101.
Applicant argues the claims are similar to claim 3 of Example 48 in the July 2024 Subject Matter Eligibility Examples because the pending claims recite additional elements that integrate the alleged judicial exception into a practical application. The claimed features improve upon conventional techniques, in part, by minimizing user involvement in updating the schedule and processing the selected region of an image using the specifically trained neural network models. Applicant argues the sequence of features results in a real-world functional change in the state of the electronic apparatus, including modification of stored schedule data and generation of new graphical user interfaces. Applicant submits the ordered combination of at least these features reflects the technical improvement as disclosed.
Examiner respectfully disagrees with Applicant’s arguments. Examiner submits that the arguments describe a sequence of functions that produce output on a user interface so that a user does not confuse schedules.
Applicant is encouraged to describe how the combination of additional elements captures the image and how the first neural network model obtains the text. The Applicant should clarify the method or additional elements used to train the neural network models.
Applicant traverses, arguing that claim 1 recites the use of a first neural network model and a second neural network model that are trained on specific data, receive specific data, and output specific data to solve technical problems. Applicant submits the first and second neural network models are not claimed in the abstract, but are embedded within a defined device-level workflow that begins with a touch input from a user selecting a region of an image displayed on the display through the application screen and ends with displaying, on the display, a user interface providing information on the updated schedule information of the user. Applicant submits the claims apply or use any nominal judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technical environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. Applicant submits claim 1 as a whole integrates the judicial exception into a practical application, and the claim is not directed to the judicial exception.
Examiner respectfully disagrees. Examiner provides an updated analysis based on the amended claims under the 35 U.S.C. 101 rejection. See below.
For clarity, Examiner submits that at Step 2A Prong One, the claims recite the abstract idea of capturing information from an image, recognizing text, producing a schedule, and identifying sub-schedules, which is a mental process.
At Step 2A Prong Two, the claimed additional elements are considered. Integration into a practical application requires an additional element, or a combination of additional elements, in the claim to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception.
At Step 2A Prong Two, the first neural network model and the second neural network model are examined for integration of the abstract idea into a practical application; they do not provide such integration. The claims use the first neural network model to “provide” data (a)-(d) and the second neural network model to provide boundary information. This amounts to adding the words “apply it” to the judicial exception. See MPEP 2106.05(f).
The Applicant’s claims do not describe how the first neural network model is trained. Merely claiming input and output using a neural network model amounts to “apply it.” See MPEP 2106.05(f). Examiner’s analysis of the second neural network model is similar to that of the first neural network model. In particular, how is the boundary determined by the second neural network model? Please clarify how “each of the plurality of pieces” is determined by the second neural network model. Examiner points Applicant to claim 6 (tags). Please also clarify “capture an image.” The Applicant’s claims are similar to claim 2 of Subject Matter Eligibility Example 48 rather than claim 3 of Example 48.
Examiner points Applicant to claim 6 (“tokenize the normalized text”) and to the instant application’s specification at p. 12, which discusses OCR and providing the obtained texts to a first neural network model.
The limitations are instructive but do not integrate the judicial exception into a practical application. The independent claims do not reflect an improvement in technology. The claims are not integrated into a practical application at Step 2A.
At Step 2B, Applicant argues the claims are directed to solving problems related to extracting meaningful information from displayed information for updating schedule information without needing the user to recognize or update the schedule information. Applicant explains a user may not be able to grasp the information on the schedules at a glance. The claimed approach improves upon conventional techniques by minimizing user involvement in updating the schedule and processing the selected region of an image using the specifically trained neural network models. Applicant argues the claims are similar to those in BASCOM: when viewed as a combination of elements and as a whole, they amount to significantly more. The claimed features provide significantly more.
Examiner respectfully disagrees with Applicant’s Step 2B arguments. As discussed at Step 2A Prong Two, the claims apply the neural network models to develop a schedule. The improvement is to the scheduling, and scheduling is an abstract concept. Therefore, the improvement is to the abstract concept itself.
As evinced by the Applicant’s arguments, the invention is a sequence of functions that produce output on a user interface so that a user does not confuse schedules. The Applicant is encouraged to clarify the extracting, the additional elements used to extract information, and the additional elements that input information into the neural network models.
Examiner did not argue BASCOM. Per the USPTO Memorandum dated November 2, 2016, Subject Matter Eligibility Decisions, the BASCOM court agreed that the additional elements were generic computer, network, and Internet components that did not amount to significantly more when considered individually, but explained that the district court erred by failing to recognize that, when combined, an inventive concept may be found in the non-conventional and non-generic arrangement of the additional elements, i.e., the installation of a filtering tool at a specific location, remote from the end-users, with customizable filtering features specific to each end user (note that the term "inventive concept" is often used by the courts to describe additional element(s) that amount to significantly more than a judicial exception). Examiner did assert that the additional elements are generic. The Applicant’s claims are not related to filters at a specific location related to Internet services. The Applicant’s BASCOM argument is therefore moot.
Applicant’s arguments are not persuasive. The pending claims are rejected under 35 U.S.C. 101.
Claim Rejections - Prior Art
Because Applicant submitted amendments to claims 1 and 11, Examiner conducted an updated prior art search. As a result of the updated prior art search, Examiner identified Tanniru (EP 3,882,814 A1), which teaches converting typed and handwritten text and electronic documents into electronic information using intelligent character recognition and optical character recognition. Tanniru’s processing platform may perform natural language processing on the image data and may tokenize words in the textual data to enable analysis of the words. In Tanniru, the processing platform may process the image data, the label data, and data identifying the first set of fields and the second set of fields, with a convolutional neural network model, to identify visual features of the image data. The visual features may be arranged in a grid that forms a sequential representation of the image of the document. Tanniru appears to teach elements of the Applicant’s specification. However, Tanniru in combination with the art of record was considered.
The prior art rejection remains withdrawn. In the prior action, Examiner withdrew the rejection under 35 U.S.C. 103 over any combination of Mason, Hoehne, Sing, Vets, Huang, and/or Bhaskar, with or without additional reference(s), because it would be improper to combine those references to teach the claimed invention.
In consideration of the previously withdrawn prior art, Examiner submits that the Applicant’s amendments further narrow the limitations of the claims, and therefore combining any additional prior art, such as Tanniru, with Mason, Hoehne, Sing, Vets, Huang, and/or Bhaskar would be improper.
Examiner submits that the Applicant’s 35 U.S.C. 103 amendments and arguments remain persuasive. The pending claims are allowable over the prior art of record.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Examiner submits that claims 1, 3, 5-11, 13, 15, and 18-19 are directed to an abstract idea. Claim 1 (and similarly claim 11) recites the abstract idea of “…
… receive an input … from a user selecting a region of an image displayed … , wherein only a portion of the displayed image is included in the selected region and the selected region includes information on a plurality of pieces of schedule information, capture an image corresponding to the selected region of the displayed image, in response to receiving the input selecting the region, simultaneously display, …., the captured image and … including a plurality of functions for the selected region, wherein the plurality of functions for the selected region comprise copying, sharing, storing, and adding plans, for the captured image, based on in response to receiving an input … from the user selecting, among the plurality of functions included in the user interface, a function corresponding to a command from the user for adding a schedule using the selected region of the image displayed …, obtain a plurality of texts by performing text recognition of the selected region of the image, input the plurality of obtained texts to … to provide obtain (a) main datetime information identifying a main time period and corresponding to each of the plurality of pieces of schedule information, (b) a plurality of sub-datetime information corresponding to the main datetime information, (c) schedule title information and (d) location information, by causing the plurality of obtained texts to be provided to …, each of the plurality of sub-datetime information corresponding to a portion of a period included in the main time period, input the plurality of obtained texts to … to provide obtain schedule boundary information corresponding to each of the plurality of pieces of schedule information by causing the plurality of obtained texts to be provided to …, the obtained schedule boundary information identifying boundaries between grouped pieces of schedule information among the plurality of pieces of schedule information that are sequentially arranged, identify a plurality of schedule 
packages based on the obtained main datetime information, the schedule title information, the location information and the schedule boundary information, the schedule packages of the plurality of schedule packages including pieces of schedule information from among the plurality of pieces of schedule information and at least one of the plurality of schedule packages including sub-schedules, update schedule information of the user based on the obtained schedule boundary information and the identified plurality of schedule packages, the obtained main datetime information corresponding to each of the plurality of pieces of schedule information, and the obtained sub-datetime information corresponding to the main datetime information, and display, … providing information on the update schedule information of the user, wherein train … to be trained to output the main datetime information and the sub-datetime information corresponding to the main datetime information based on receiving a plurality of pieces of datetime information, wherein train … to output text having … and train based on a plurality of pieces of text information as input data, the plurality of pieces of text information including a text having the first structure corresponding to boundary information, a text having … not corresponding to the boundary information, and a text having a third structure not corresponding to boundary information, identify that sub-schedules progressing at same location belongs to same schedule package, and identify that sub-schedules having same keyword in the schedule title information corresponding to each sub-schedule belong to same schedule package... ”. Claims 1, 3, 5-11, 13, 15, and 18, 19 in view of the claim limitations, are directed to the abstract idea of capturing information from an image, recognizing text, producing a schedule and identifying sub-schedules, and thus, the claims recite mental processes. 
Thus, the claims are directed to an abstract idea under the first prong of Step 2A.
The judicial exception is not integrated into a practical application under the second prong of Step 2A. In particular, the claims recite additional elements beyond the recited abstract idea: “An electronic apparatus comprising: a display including a touch screen; a memory storing at least one instruction; and at least one processor, comprising processing circuitry, connected to the memory and the display and configured to: execute an application providing an application screen”, “on the touch screen”, “on the display through the application screen”, “a user interface”, “a plurality of functions for the selected region, wherein the plurality of functions for the selected region comprise copying, sharing, storing, and adding plans, for the captured image”, “an input on the touch screen from the user”, “a first neural network model”, “a second neural network model”, “a first structure”, and “a text having a second structure”, in claim 1 (and similarly claim 11). However, when viewed individually and as an ordered combination, and pursuant to the broadest reasonable interpretation, each of the additional elements is a generic computing element that amounts to adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).
Regarding the limitations “capture an image” and “a plurality of functions for the selected region, wherein the plurality of functions for the selected region comprise copying, sharing, storing, and adding plans, for the captured image,” these limitations may be construed as computer functions; however, they could also be construed as tasks a human could complete with pen and paper and therefore could be abstract. Please clarify.
Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims also fail to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, adding unconventional steps that confine the claim to a particular useful application, or meaningful limitations beyond generally linking the use of an abstract idea to a particular environment.
At Step 2B, the additional elements amount to well-understood, routine, and conventional activity under MPEP 2106.05(d), e.g., receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information).
Examiner concludes that the additional elements in combination fail to amount to significantly more than the abstract idea based on findings that each element merely performs the same function(s) in combination as each element performs separately. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. The claims are not patent eligible.
Dependent claims 3, 5-10, 18, and 19 further narrow the abstract idea of independent claim 1. Dependent claims 13 and 15 further narrow claim 11. Claims 1, 3, 5-11, 13, 15, and 18-19 are not patent eligible.
Moreover, aside from the aforementioned additional elements, the remaining elements of dependent claims 3, 5-10, 13, 15, 18, and 19 do not transform the recited abstract idea into a patent-eligible invention because these claims merely recite further limitations that do no more than narrow the recited abstract idea.
Since there are no limitations in these claims that transform the exception into a patent-eligible application such that the claims amount to significantly more than the exception itself, claims 1, 3, 5-11, 13, 15, 18, and 19 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THEA LABOGIN whose telephone number is (571)272-9149. The examiner can normally be reached Monday -Friday, 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patricia Munson, can be reached at 571-270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/THEA LABOGIN/ Examiner, Art Unit 3624
/PATRICIA H MUNSON/ Supervisory Patent Examiner, Art Unit 3624