DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. This is in response to the applicant's response filed on 09/08/2025. In that response, claims 1-11 were cancelled and claims 12-15 were newly added. Accordingly, claims 12-15 are pending and are examined herein. Claim 12 is the sole independent claim.
Claim Rejections - 35 USC § 112
3. The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
4. (New Matter or Non-Disclosure) Claims 12-15 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA applications the inventor(s), at the time the application was filed, had possession of the claimed invention.
4-1. Regarding independent claim 12, the claim recites “[a] computerized method for contextual smart computer vision comprising: with a digital camera of a smartphone, obtaining a digital image of a computer screen of a laptop computer different from the smartphone” in lines 1-3. However, support for this limitation is not found in the applicant’s originally filed specification. Rather, “a laptop computer” and “a smart phone” as recited in the specification are interchangeable. The specification, at paragraph [0025], states:
“It is noted that in other example embodiments, another type of mobile device (e.g. a tablet computer, etc.) can be utilized in lieu of a smart phone.”
It is apparent that a mobile device as recited in the specification includes a smart phone and a tablet (i.e., laptop) computer, while a tablet (or laptop) computer as recited in the specification can be replaced by a smart phone in implementing the claimed method. Even in practice, there is no difference between a laptop computer and a smart phone with respect to implementing the claimed method in a messaging application; the two devices are interchangeable.
Secondly, the limitation “a laptop computer [is] different from the smartphone” is a negative limitation. According to MPEP 2173.05(i), “Any negative limitation or exclusionary proviso must have basis in the original disclosure. If alternative elements are positively recited in the specification, they may be explicitly excluded in the claims.” Further, “[t]he mere absence of a positive recitation is not basis for an exclusion... Any claim containing a negative limitation which does not have basis in the original disclosure should be rejected under 35 U.S.C. 112(a), as failing to comply with the written description requirement.”
For the reasons set forth above, independent claim 12 is rejected under 35 U.S.C. 112(a) as failing to comply with the written description requirement.
4-2. The remaining claims 13-15 each depend from claim 12 and are therefore rejected under 35 U.S.C. 112(a) for the same reasons.
Claim Rejections - 35 USC § 103
5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
6. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
7. Claims 12-15 are rejected under 35 U.S.C. 103 as being unpatentable over Choi (US 2019/0042079, hereinafter “Choi”) in view of Sharifi et al. (US 9,582,482, hereinafter “Sharifi”).
Regarding claim 12, Choi discloses a computerized method for contextual smart computer vision (the method of providing a search result in an electronic device; see Abstract) comprising:
with a digital camera of a smartphone, obtaining a digital image of a computer screen of a laptop computer different from the smartphone (see 1701-1702 of fig.17, and para.270: “the electronic device A may capture the screen including the object and generate a captured image (in operation 1703)”), wherein the computer screen is displaying a specified computing application (see, e.g., fig.1(a) and para.67: “the screen including the object 111 may be an application execution screen.”);
with a machine learning algorithm (for “object recognition” described in step 1705 of fig.17 and para.273: “the object information may be the information which is acquired by applying the trained recognition model set to estimate the object area as the object information”; see para.292: “The training component 1910 may generate a recognition model having a determination criterion using collected learning data.”):
obtaining a set of training digital images of computer screens and associated contextual actions (see para.292: “The training component 1910 may generate a recognition model having a determination criterion using collected learning data”; wherein the learning data include the object image/information and the context information described in para.304: “the data learning acquisition unit 1910-1 may acquire at least one of the entire image including the object, the image corresponding to the object area, the object information, and the context information, as learning data.”), and
using the machine learning algorithm to build and train a machine learning classifier based on the set of training digital images of computer screens and associated contextual actions (see para. 293: “the training component 1910 [i.e., the processor 1900 shown in fig.19A] may generate, train, or renew an object recognition model having a criterion for determining which object is included in an image using the image including the object, as learning data.”);
using the machine learning classifier executed in a hybrid fashion with a mixture of locally executing machine learning models and cloud-based models (see the network system for executing a search application shown by fig.2B, which consists of the electronic device A, the recommendation device B, the object recognition device C, the user characteristic recognition device D, and the data collection device E as cloud; see para.108) to classify the digital image and determine a specific application action based on the classification of the digital image (see para.298: “the detector 1920 [i.e., the processor 1900 shown in fig.19A] may estimate (or determine, infer) a search category for providing a search result by applying at least one of the object information and the context information to the trained recognition model.” As shown in fig.1(c), wherein the search result 131 provided by the trained recognition model includes the specific application (hotel search) action—“AAA HOTEL”.);
enabling live context sharing between the smartphone and the laptop computer (see para.102: “the communicator 150 may transmit a captured image to an external server, or transmit information regarding an object area and context information (e.g., peripheral information of the object, etc.).”); and based on context, with the mobile application, suggesting specified contextual actions including sharing via WhatsApp (see para.73: “The electronic device A may acquire a search result related to the object 111 using information regarding the object 111 and context information 121, which is acquired according to the selection of the object 111.”).
As explained above, the only difference between the method of Choi and the claimed method is that Choi does not explicitly disclose “[1] obtaining a digital image of a computer screen of a laptop computer different from the smartphone” and “[2] adding an event to a personal calendar” as recited in the claim. However, as to feature [1], it would have been obvious to one of ordinary skill in the art that a screen image displayed on a screen of a laptop computer can be captured either by the laptop computer itself or by another mobile device such as a smartphone, and shared between the laptop computer and the smartphone, since this function has been well known and widely used in laptop computers and smartphones. As to feature [2], in the same field of endeavor, that is, the field of providing insight for entities in mobile onscreen content, Sharifi teaches a method for generating annotation data for actionable content displayed on a mobile computing device, see fig.8; the method includes “opening a contacts mobile application for an email address or phone number, initiating a phone call for a phone number, sending an email to an email address, adding an event or reminder in a calendar for a date, etc.”, see col.22, lines 49-59. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Sharifi into the teachings of Choi and add an event or reminder in a calendar for a date as taught by Sharifi. The suggestion or motivation for doing so would have been to “provide a consistent user experience across mobile applications, so that similar type of actionable content behaves the same across applications” and to “allow a user of a mobile device to share a screen with another user or to transfer the state of one mobile device to another mobile device” as taught by Sharifi, see col.3, lines 25-42. Therefore, the claim is unpatentable over Choi in view of Sharifi.
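For illustration only, the hybrid classification arrangement recited in claim 12 (a locally executing model supplemented by a cloud-based model, with contextual actions suggested from the classification) may be sketched as follows. All function names, labels, thresholds, and actions in this sketch are hypothetical assumptions for exposition and do not represent the applicant's disclosure or the implementation of any cited reference.

# Hypothetical sketch of hybrid local/cloud screen-image classification with
# suggested contextual actions. Names, labels, the threshold, and the action
# list are illustrative assumptions only.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Classification:
    label: str          # e.g. "calendar_invite", "hotel_listing"
    confidence: float   # 0.0 - 1.0

# Hypothetical mapping from a classified screen context to suggested actions.
CONTEXT_ACTIONS = {
    "calendar_invite": ["add event to personal calendar", "share via WhatsApp"],
    "hotel_listing": ["open booking application", "share via WhatsApp"],
}

def classify_locally(image_bytes: bytes) -> Classification:
    """Stand-in for an on-device (locally executing) model."""
    # A real implementation would run an on-device neural network here.
    return Classification(label="calendar_invite", confidence=0.62)

def classify_in_cloud(image_bytes: bytes) -> Classification:
    """Stand-in for a cloud-based model; stubbed so the sketch runs offline."""
    return Classification(label="calendar_invite", confidence=0.95)

def classify_hybrid(image_bytes: bytes,
                    local_threshold: float = 0.8) -> Tuple[Classification, List[str]]:
    """Use the local model first; defer to the cloud model when the local
    confidence falls below a (hypothetical) threshold, then look up actions."""
    result = classify_locally(image_bytes)
    if result.confidence < local_threshold:
        result = classify_in_cloud(image_bytes)
    actions = CONTEXT_ACTIONS.get(result.label, [])
    return result, actions

if __name__ == "__main__":
    classification, suggested_actions = classify_hybrid(b"<captured screen image bytes>")
    print(classification, suggested_actions)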
Regarding claim 13, the combination of Choi and Sharifi discloses the method of claim 12, wherein the computing system comprises a laptop computer and the mobile device comprises a smartphone (e.g., see Sharifi, mobile device 170 and mobile device 130 shown by fig.3; see col.12, lines 37-41: “Mobile device 170 may be any mobile personal computing device, such as a smartphone or other handheld computing device, a tablet, a wearable computing device, etc.”).
Regarding claim 14, the combination of Choi and Sharifi discloses the method of claim 12, wherein the specified contextual actions include sharing via WhatsApp (WhatsApp was launched in 2009 and is widely used as one of the most popular messaging applications; it would have been obvious to one of ordinary skill in the art to include it among the suggested sharing actions).
Regarding claim 15, the combination of Choi and Sharifi discloses the method of claim 12, wherein the machine learning classifier is executed in a hybrid fashion with a mixture of locally executing machine learning models and cloud-based models (Choi, see the network system for executing a search application shown by fig.2B, which consists of the electronic device A, the recommendation device B, the object recognition device C, the user characteristic recognition device D, and the data collection device E as cloud; see para.108).
Response to Arguments
8. Applicant's arguments received on 09/08/2025 have been considered but are moot in view of the new ground(s) of rejection.
Conclusion
9. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
10. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RUIPING LI whose telephone number is (571) 270-3376. The examiner can normally be reached 8:30 am to 5:30 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, HENOK SHIFERAW can be reached on (571)272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RUIPING LI/Primary Examiner, Ph.D., Art Unit 2676