Prosecution Insights
Last updated: April 19, 2026
Application No. 18/713,055

MEMO PROCESSING DEVICE BASED ON AUGMENTED REALITY, SYSTEM, AND METHOD THEREFOR

Non-Final Office Action (§103)

Filed: May 23, 2024
Examiner: GE, JIN
Art Unit: 2619
Tech Center: 2600 — Communications
Assignee: Changwon National University Industry Academy Cooperation Corps
OA Round: 1 (Non-Final)

Grant Probability: 80% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 9m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allowance Rate: 80% (416 granted / 520 resolved), +18.0% vs Tech Center average (above average)
Interview Lift: +18.0% (allowance rate with vs. without an interview, among resolved cases with an interview)
Typical Timeline: 2y 9m average prosecution; 38 applications currently pending
Career History: 558 total applications across all art units
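The card values above are internally consistent and can be reproduced from the raw counts. A minimal sketch, treating the reported deltas as percentage points (the 62% Tech Center average is implied, not stated):

```python
# Reproduce the examiner statistics reported above from the raw counts.
granted, resolved = 416, 520
career_allow_rate = granted / resolved            # 0.80, as reported
implied_tc_average = career_allow_rate - 0.18     # implied by "+18.0% vs TC avg"
with_interview = 0.98                             # reported rate with interview
interview_lift = with_interview - career_allow_rate

print(f"{career_allow_rate:.0%} career, {interview_lift:+.0%} interview lift")
# 80% career, +18% interview lift
```

This is just a consistency check: the "+18.0%" interview lift and the "98% With Interview" figure agree with the 80% career allowance rate.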

Statute-Specific Performance

§101: 9.0% (-31.0% vs TC avg)
§103: 60.2% (+20.2% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 520 resolved cases.
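The Tech Center averages implied by the deltas can be backed out directly (a sketch, assuming each delta is in percentage points relative to the same statute); notably, every statute's implied TC average comes out to 40.0%:

```python
# Back out the implied Tech Center average per statute:
# examiner rate minus the reported delta (percentage points).
rates = {"101": 9.0, "103": 60.2, "102": 12.0, "112": 11.0}
deltas = {"101": -31.0, "103": +20.2, "102": -28.0, "112": -29.0}

tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_avg)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

The uniform 40.0% suggests the dashboard benchmarks each statute against a single Tech Center-wide estimate rather than per-statute averages.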

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copies of Korean patent applications KR10-2021-0186065, filed on 12/23/2021, and KR10-2021-0162617, filed on 11/23/2021, have been received and made of record.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 05/23/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Election/Restrictions

Claims 9-12 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected species, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 01/27/2026. Applicant's election without traverse of claims 1-8 and 13-18 in the reply filed on 01/27/2026 is acknowledged.

Claim Objections

Claims 6 and 10 are objected to because of the following informalities: “comprises a Move button, and upon execution of the Move button” should be “comprises a move button, and upon execution of the move button” in claim 6; “Google Glass” should be “google glass” in claim 10. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. - An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C.
112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: "a memo acquisition unit configured to specify" and "a memo presentation unit configured to display" in claim 1; "a captioning unit configured to perform" and "a search unit configured to search" in claim 2; and "a memo list composition part configured to compose" and "an overlay screen processing part configured to display" in claim 5.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 7-8, and 13-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2018/0330204 to Kim et al. in view of U.S. PGPub 2019/0124213 to Horvath et al.

Regarding claim 1, Kim et al.
teach a memo processing device based on augmented reality (abstract), comprising: a camera configured to film a video (par 0065, par 0087, “The camera 490 may capture object(s) in front or behind the electronic device 401 and output the captured image(s) of the object(s). …each camera module may capture still images or video under the control of the processor 410 and output the captured still images or video to the processor 410 or memory 430. The processor 410 may store the captured still images or video in the memory 430 or display them on the display 460”, par 0169-0172, “The electronic device may obtain an image for an object using a camera (e.g., the camera module 291 or camera 490) functionally connected with the electronic device”); a memo acquisition unit configured to specify a subject to which a memo is to be attached in the filmed video and store the subject and the memo in a memory (par 0089, “At least part of the ontology database 411 and the metadata model database 413 may be integrated with the image/metadata database 412. The databases 411 to 414 may be parts of at least one database. The ontology and metadata model may be regarded as metadata or part of metadata”, par 0096, “The image/metadata database 412 may include a plurality of images and a plurality of metadata each corresponding to a respective one of the plurality of images. The plurality of metadata may be stored in the form of a database with a plurality of data records. Each of the plurality of images may be a still image file or video file”, par 0099-0100, par 0112-0114, “The processor 410 may integrate the recognition information with the image-related information based on the ontology database 411 and/or metadata model database 413. 
The processor 410 may store the integrated information, as metadata of the image, in the image/metadata database 412, the ontology database 411, and/or metadata model database 413 or may provide services or functions using the metadata”, par 0175-0176, “the electronic device may integrate recognition information about the image with the image-related information. In one embodiment, the electronic device may incorporate the recognition information about the image with the image-related information based on a first database (e.g., at least one of the ontology database 411 or the metadata model database 413) defining a plurality of information/data elements and relations among the plurality of information/data elements”, par 0186-0188, “the electronic device may store the integrated information, as metadata of the image, in the memory, a third database (e.g., the image/metadata database 412), or the first database (e.g., at least one of the ontology database 411 or the metadata model database 413).”). a memo presentation unit configured to display the memo stored in the memory (par 0258-0259, “the electronic device may display the information/data elements of the first group on a first area (or first screen area or first window) of the display. The plurality of information/data elements may be related to a plurality of metadata for the plurality of images stored in the electronic device (or the memory 130, 230, or 430) or the third database (e.g., the image/metadata database 412)“, par 0274-0275, “the electronic device may display the information/data elements 1031 to 1034 of the first group on a first area (or first screen area or first window) of the display. The plurality of information/data elements may be related to a plurality of metadata for the plurality of images stored in the electronic device (or the memory 130, 230, or 430) or the third database (e.g., the image/metadata database 412).”). But Kim et al. 
are silent regarding a memo presentation unit configured to display the memo in an overlay form when the camera faces the subject.

In a related endeavor, Horvath et al. teach a memo presentation unit configured to display the memo in an overlay form when the camera faces the subject (par 0053, “the data conversion module 516 may convert the extracted data entry items into user interface items that can be suitably presented as annotations on the display screen. In operation 640, the annotations are displayed as overlays on a live view image of the cheque on the display screen. In particular, the UI manager module 520 takes camera input (i.e. live preview of scene captured by camera) and the annotations generated from the extracted data entry items to produce display output. Thus, the displayed output contains a live view image of the cheque as well as one or more annotation overlays positioned over the live view image.”).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Kim et al. to include a memo presentation unit configured to display the memo in an overlay form when the camera faces the subject, as taught by Horvath et al., in order to provide annotation overlays, each containing a respective one of the one or more data entry items and displayed in association with its respective data field, thereby providing user interfaces that facilitate effective capture and processing of images of documents on electronic devices.

Regarding claim 7, Kim et al. as modified by Horvath et al. teach all the limitations of claim 1, and Kim et al.
further teach wherein the memory is an internal memory provided in the mobile device (par 0037) or an external memory removably attached to the mobile device (Fig 2, par 0057, “The memory 230 (e.g., the memory 130) may include, e.g., an internal memory 232 or an external memory 234 … The external memory 234 may include a flash drive, e.g., a compact flash (CF) memory, a secure digital (SD) memory, a micro-SD memory, a mini-SD memory, an extreme digital (xD) memory, a multi-media card (MMC), or a memory Stick™”, par 0086-0089, “The memory 430 may include an ontology database 411, an image/metadata database 412, a metadata model database 413, and a target database 414. The image/metadata database 412, the ontology database 411, and the metadata model database 413 may be parts of one database”).

Regarding claim 8, Kim et al. as modified by Horvath et al. teach all the limitations of claim 1, and Kim et al. further teach wherein the mobile device includes a smartphone, Google Glass, and a head-worn display (par 0032, “examples of the electronic device according to various embodiments may include at least one of a smartphone, a tablet personal computer (PC), a mobile phone, a video phone, an e-book reader, a desktop PC, a laptop computer, a netbook computer, a workstation, a server, a personal digital assistant (PDA), a portable multimedia player (PMP), a MP3 player, a mobile medical device, a camera, or a wearable device. According to various embodiments, the wearable device may include at least one of an accessory-type device (e.g., a watch, a ring, a bracelet, an anklet, a necklace, glasses, contact lenses, or a head-mounted device (HMD)), a fabric- or clothes-integrated device (e.g., electronic clothes), a body attaching-type device (e.g., a skin pad or tattoo), or a body implantable device (e.g., an implantable circuit)”).

Regarding claims 13-14, the method claims 13-14 are similar in scope to claims 1 and 1+7 and are rejected under the same rationale.
Claim(s) 2 and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2018/0330204 to Kim et al. in view of U.S. PGPub 2019/0124213 to Horvath et al., further in view of U.S. PGPub 2017/0185236 to Yang et al.

Regarding claim 2, Kim et al. as modified by Horvath et al. teach all the limitations of claim 1, and Horvath et al. further teach further comprising: a captioning unit configured to perform image captioning of a video of the subject to which the memo is attached (par 0051-0052, “The OCR engine 514 is capable of converting images of typed, handwritten, or printed text into digital format, such as machine-encoded text. The OCR engine 514 detects an image representation of a data entry item in a particular data field region and converts the image representation into text format. In this way, the text associated with the data entry items represented in the acquired image of the cheque can be extracted”); but are silent regarding a search unit configured to search content of the memo or captioned images.

In a related endeavor, Yang et al. teach a search unit configured to search content of the memo or captioned images (par 0003, “the image chat application can search through a repository of stored images to identify a stored image that is similar to the user image. The image chat application can further generate and submit a comment to the user image based on a comment that is paired with the similar image”, par 0024, “the image chat application may use the image-to-image comparison technique 202 to compare a user image 206 with a dataset of stored images 214, and select a similar image from the dataset that possesses comparably similar features to the user image 206.
In doing so, the image chat application may identify a comment 204 associated with the similar image from the dataset of stored images 214, and submit the identified comment 204 as a response to the user image 206”, par 0027, “In some examples, the image-to-image comparison technique 202 may focus on identifying similar images that compare wholly with a user image 206. In other examples, the image-to-image comparison technique 202 may instead focus only on one dominant object of the user image 206. For example, a client device 210 may submit data indicating a user image 206 of a “cat on a beach.” In response, the image-to-image comparison technique 202 may identify one or more comments associated with similar images of the “cat,” and may return a comment 204, such as “that's a very happy cat.””, par 0030-0031, “in response to identifying a dominant object 314 of a user image 306, the image-to-tag comparison technique 302 may compare the dominant object 314 of the user image 306 with a dataset of tagged images 316. In doing so, the image chat application may select an image from the dataset of tagged images 316, as a similar image 318, which possesses comparably similar features to the dominant object 314 of the user image 306. Further, a tag 320 that is associated with the similar image 318 may be identified. In response to identifying the one or more tags 320 associated with the similar image 318, the image-to-tag comparison technique may further identify one or more comments associated with the tag 320. In this instance, the image chat application may randomly select a comment 304 from the plurality of comments to direct towards the user image 306”).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Kim et al. as modified by Horvath et al. to include a search unit configured to search content of the memo or captioned images, as taught by Yang et al.,
to generate and submit a comment to the user image based on a comment that is paired with the similar image, to further engage the user's attention by mimicking a social interaction.

Regarding claim 15, the method claim 15 is similar in scope to claim 2 and is rejected under the same rationale.

Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2018/0330204 to Kim et al. in view of U.S. PGPub 2019/0124213 to Horvath et al., further in view of U.S. PGPub 2017/0236037 to Rhoads et al.

Regarding claim 3, Kim et al. as modified by Horvath et al. teach all the limitations of claim 1, but do not teach wherein the memo acquisition unit specifies a subject using a keypoint extraction algorithm.

In a related endeavor, Rhoads et al. teach wherein the memo acquisition unit specifies a subject using a keypoint extraction algorithm (par 0095, “Note that this entails processing each view image in accordance with one or more suitable feature extraction algorithms (e.g., color histogram, FAST (Features from Accelerated Segment Test), SIFT, PCA-SIFT (Principal Component Analysis-SIFT), F-SIFT (fast-SIFT), SURF, ORB, etc.) to generate one or more reference image features. Generally, view images are generated for each object represented by an object signature in the signature database”, par 0097, “as an input, query data representing an image depicting an oblique view of an object-of-interest (e.g., a Wheaties box), with one or more feature sets extracted therefrom (e.g., using one or more of feature extraction algorithms of the likes noted above) to generate one or more query image features. In one embodiment, the feature extraction algorithm(s) may be applied roughly around the sampled object to coarsely “frame” the object and thus begin a marginal amount of noise reduction due to non-object image data.
A preliminary matching process is then performed by querying the fast-search database to identify reference image features that are sufficiently similar to the query image feature(s)”).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Kim et al. as modified by Horvath et al. to include wherein the memo acquisition unit specifies a subject using a keypoint extraction algorithm, as taught by Rhoads et al., to optimize extremely fast search and initial matching and thereby perform an object recognition process in a computationally- and time-efficient manner.

Claim(s) 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2018/0330204 to Kim et al. in view of U.S. PGPub 2019/0124213 to Horvath et al., further in view of U.S. PGPub 2006/0246957 to Lim.

Regarding claim 5, Kim et al. as modified by Horvath et al. teach all the limitations of claim 1, and Horvath et al. further teach wherein the memo presentation unit comprises: an overlay screen processing part configured to display the memo as an overlay (par 0051-0052, “The OCR engine 514 is capable of converting images of typed, handwritten, or printed text into digital format, such as machine-encoded text. The OCR engine 514 detects an image representation of a data entry item in a particular data field region and converts the image representation into text format. In this way, the text associated with the data entry items represented in the acquired image of the cheque can be extracted”); but do not teach wherein the memo presentation unit comprises: a memo list composition part configured to compose a list of memos.
In a related endeavor, Lim teaches wherein the memo presentation unit comprises: a memo list composition part configured to compose a list of memos (par 0017, par 0047, par 0065, “when the memo function execution screen displayed is the memo list screen, the user can select a memo he intends to check from the list and then checks the memo contents therein. When the memo function execution screen displayed is the memo contents input screen, the user can compose a memo”).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Kim et al. as modified by Horvath et al. to include wherein the memo presentation unit comprises: a memo list composition part configured to compose a list of memos, as taught by Lim, to optimize extremely fast search and initial matching to perform an object recognition process in a computationally- and time-efficient manner.

Claim(s) 6 and 16-17 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2018/0330204 to Kim et al. in view of U.S. PGPub 2019/0124213 to Horvath et al., further in view of U.S. PGPub 2019/0266263 to Jiang et al.

Regarding claim 6, Kim et al. as modified by Horvath et al. teach all the limitations of claim 1, but do not teach wherein the memo comprises a Move button, and upon execution of the Move button, the memo is attached to a video of a newly filmed subject. In a related endeavor, Jiang et al.
teach wherein the memo comprises a Move button, and upon execution of the Move button, the memo is attached to a video of a newly filmed subject (par 0007, “a first image of the object; processing, at the first mobile device, the first image to identify at least a first image attribute, the first image attribute including an image feature or an image object; accessing, by the first mobile device, a database of registered objects, each registered object being identified by a linked image and being associated with metadata describing the object; retrieving from the database of registered objects a first registered object having a linked image matching the first image attribute; in response to the retrieving, providing, at the first mobile device, a user interface configured to interact with the first registered object; receiving, at the first mobile device using the user interface, a digital content; assigning, at the first mobile device using the user interface, the digital content to the first registered object; designating, at the first mobile device using the user interface, one or more recipients of the digital content; and storing linking data for the first registered object in the database of registered objects, the linking data associating the digital content to the first registered object”, par 0030, “The advanced user interaction system can be implemented to allow services to be attached to an object”, par 0061, “the advanced user interaction system may be deployed to enable digital content exchange between users. The object retrieval method is executed when the user wants to attach or receive digital content after the on-boarding process”, par 0067, “with the object retrieved, the method 100 provides a user interface designated for interacting with the retrieved object. In one embodiment, the user interface may be provided on the display of the mobile device. 
In the present embodiment, the user interface may provide an option to attach a digital content to the retrieved object (step 112). The user interface may also provide an option to receive digital content that may be attached to the retrieved object (step 114)”, par 0073-0075, “in response to user input, the method 112 assigns the digital content to the retrieved object …. The content link can be used to retrieve, update, modify and delete the digital content. In the event that the object is retrieved later by a user to receive the digital content, the App on the mobile device accesses the data structure of the object and then obtains the content link attached to the object …. multiple digital content may be attached to an object and multiple links may be attached to the metadata or data structure of the object” (attaching metadata, i.e. related digital content, to an object in the image).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Kim et al. as modified by Horvath et al. to include wherein the memo comprises a Move button, and upon execution of the Move button, the memo is attached to a video of a newly filmed subject, as taught by Jiang et al., to provide a user interface configured to interact with an object of an image and attach digital content to it, by retrieving from the database of registered objects an object having a linked image matching an attribute of the image, thereby allowing services to be attached to an object.

Regarding claim 16, Kim et al. as modified by Horvath et al. teach all the limitations of claim 13, but do not teach further comprising: moving and attaching the memo attached to the designated subject to another subject. In a related endeavor, Jiang et al.
teach further comprising: moving and attaching the memo attached to the designated subject to another subject (par 0023, “an advanced user interaction system and method implemented in a mobile device processes an image to identify an object and enables a digital content to be attached to the object or to be received from the object. The advanced user interaction system and method enables the digital content to be designated for specific recipients or for all recipients”, par 0038, “The advanced user interaction system of the present disclosure is implemented by a user first on-boarding an object and then retrieving the on-boarded object, by the same user or by a different user. Digital content can be attached to and received from the object”, par 0041, “With object 14 retrieved, User 2 leaves a message and attaches the message to object 14. User 2 also sets up the permission level that only User 3 can receive the message. At a later time, User 3 retrieves the object 14 by scanning the bowl with the App using her mobile device 12. User 3 then receives the message that User 2 has created”, par 0046, “ With object 25 retrieved, User 1 leaves a message and attaches the message to object 25. User 1 also sets up the permission level that User 3 can receive the message. At another time, User 2 retrieves the object 25 by scanning the present with the App using her mobile device 12. With object 25 retrieved, User 2 leaves a message and attaches the message to object 25. “, par 0067, “with the object retrieved, the method 100 provides a user interface designated for interacting with the retrieved object. In one embodiment, the user interface may be provided on the display of the mobile device. In the present embodiment, the user interface may provide an option to attach a digital content to the retrieved object (step 112). 
The user interface may also provide an option to receive digital content that may be attached to the retrieved object (step 114)”, par 0073-0075, “in response to user input, the method 112 assigns the digital content to the retrieved object …. the method 112 stores, in the database of registered objects, in the metadata of the retrieved object or the data structure of the retrieved object a content link to a location in the content server at which the digital content is stored. The content link can be used to retrieve, update, modify and delete the digital content. In the event that the object is retrieved later by a user to receive the digital content, the App on the mobile device accesses the data structure of the object and then obtains the content link attached to the object. …. multiple digital content may be attached to an object and multiple links may be attached to the metadata or data structure of the object” (providing a user interface to link digital content or metadata to different objects).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Kim et al. as modified by Horvath et al. to include further comprising: moving and attaching the memo attached to the designated subject to another subject, as taught by Jiang et al., to provide a user interface configured to interact with an object of an image and attach digital content to it, by retrieving from the database of registered objects an object having a linked image matching an attribute of the image, thereby allowing services to be attached to an object.

Regarding claim 17, Kim et al. as modified by Horvath et al. teach all the limitations of claim 13, but do not teach wherein a subject to which the memo is to be attached is specified using a keypoint extraction algorithm, and the memo is displayed on a screen using a keypoint matching algorithm. In a related endeavor, Jiang et al.
teach wherein a subject to which the memo is to be attached is specified using a keypoint extraction algorithm, and the memo is displayed on a screen using a keypoint matching algorithm (par 0007, “the first image to identify at least a first image attribute, the first image attribute including an image feature or an image object; accessing, by the first mobile device, a database of registered objects, each registered object being identified by a linked image and being associated with metadata describing the object; retrieving from the database of registered objects a first registered object having a linked image matching the first image attribute; in response to the retrieving, providing, at the first mobile device, a user interface configured to interact with the first registered object; receiving, at the first mobile device using the user interface, a digital content; assigning, at the first mobile device using the user interface, the digital content to the first registered object; designating, at the first mobile device using the user interface, one or more recipients of the digital content; and storing linking data for the first registered object in the database of registered objects, the linking data associating the digital content to the first registered object”, par 0051-0053, “image features refer to derived values that are informative and descriptive of the image, such as edges and contours in the digital image. In the present description, image objects refer to instances of semantic objects in the digital image, such as a vase, a lamp or a human. In the following description, the term “image attributes” is sometimes used to refer collectively to image features or image object of an image. At step 56, the method 50 registers the object and stores the object in the database of registered objects. 
In the present description, registering an object refers to adding or entering the device into the advanced user interaction system of the present disclosure using information provided about the object. In particular, the object is stored in the database identified by the image features or recognized image object as the linked image. The object may also be stored with associated metadata in the database. For example, the metadata may include the name or an identifier of the object”, par 0065-0067, “the method 100 processes the image to identify the image attributes of the image, that is, to identify the image features and/or a recognized image object in the image … the method 100 accesses the database of registered objects. At 108, the method 100 retrieves an object with the matching linked image. For example, the method 100 may compare the extracted image features or recognized image object with linked images in the database … the user interface may provide an option to attach a digital content to the retrieved object (step 112). The user interface may also provide an option to receive digital content that may be attached to the retrieved object (step 114)”, par 0066-0067, “the method 100 accesses the database of registered objects. At 108, the method 100 retrieves an object with the matching linked image. For example, the method 100 may compare the extracted image features or recognized image object with linked images in the database … with the object retrieved, the method 100 provides a user interface designated for interacting with the retrieved object. In one embodiment, the user interface may be provided on the display of the mobile device”).

It would have been obvious to a person of ordinary skill in the art, at the time before the effective filing date of the claimed invention, to modify Kim et al. as modified by Horvath et al.
to include wherein a subject to which the memo is to be attached is specified using a keypoint extraction algorithm, and the memo is displayed on a screen using a keypoint matching algorithm, as taught by Jiang et al., in order to provide match processing based on attributes of the image to find and link the best-matching object and associated metadata, thereby allowing services to be attached to an object.

Allowable Subject Matter

Claims 4 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: The cited prior art fails to teach the combination of elements recited in claim 4, including "wherein the memo presentation unit uses a keypoint matching algorithm, and calculates a homography if a degree of matching between a keypoint in a camera frame and a keypoint in one of candidate memos is greater than or equal to a threshold value when the camera faces the subject, and then displays an image rendered of the memo as an overlay on a screen".

The following is a statement of reasons for the indication of allowable subject matter: The cited prior art fails to teach the combination of elements recited in claim 18, including "wherein the displaying of the memo on the screen comprises: matching a keypoint in a camera frame with a keypoint in one of candidate memos when the camera of the mobile device faces the subject; determining if a degree of the matching is greater than or equal to a predetermined threshold; calculating, if the degree is greater than or equal to the predetermined threshold, a homography between the keypoint in the camera frame and the keypoint in one of the candidate memos; and displaying an image rendered of the memo as an overlay on a screen according to a result of the calculation".
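The claim-18 flow deemed allowable above — match keypoints between the camera frame and a candidate memo, test the degree of matching against a threshold, compute a homography only if the threshold is met, then render the overlay — can be sketched as follows. This is an illustrative implementation, not the application's: the function names and the toy DLT (Direct Linear Transform) solver are ours, and a production AR pipeline would typically use a robust estimator such as RANSAC over real feature descriptors.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: least-squares 3x3 homography H mapping
    src points to dst points (needs at least 4 correspondences)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)          # null-space vector = flattened H
    return H / H[2, 2]                # fix the arbitrary scale

def memo_overlay_homography(frame_kps, memo_kps, matches, threshold=10):
    """Claim-18-style gate: compute the homography only when the degree of
    matching (number of matched keypoint pairs) reaches the threshold."""
    if len(matches) < threshold:
        return None                               # too few matches: no overlay
    src = [memo_kps[i] for i, _ in matches]       # keypoints in the memo image
    dst = [frame_kps[j] for _, j in matches]      # keypoints in the camera frame
    return estimate_homography(src, dst)          # warp the memo with H to overlay
```

For example, if the frame keypoints are the memo keypoints shifted by (2, 3), the recovered H is a pure translation, and fewer matches than the threshold yields `None` (no overlay is displayed).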
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jin Ge, whose telephone number is (571) 272-5556. The examiner can normally be reached 8:00 to 5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Chan, can be reached at (571) 272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JIN GE/
Primary Examiner, Art Unit 2619
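The on-board/retrieve flow the examiner quotes from Jiang — register an object under its linked image attributes, retrieve it by matching the attributes of a newly captured image, then attach a digital-content link to the retrieved object — can be sketched as follows. This is a minimal illustration under stated assumptions: `RegisteredObject`, `match_score`, and `retrieve` are our names, the feature tuples are stand-ins for extracted image attributes, and a real system would match learned feature descriptors rather than hand-made vectors.

```python
from dataclasses import dataclass, field

@dataclass
class RegisteredObject:
    # A registered object is identified by a linked image (here, a feature
    # vector standing in for its image attributes) plus metadata and any
    # content links attached to it (cf. Jiang par. 0073-0075).
    name: str
    linked_features: tuple
    metadata: dict = field(default_factory=dict)
    content_links: list = field(default_factory=list)

def match_score(a, b):
    # Toy similarity: fraction of feature components that agree.
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def retrieve(database, image_features, threshold=0.8):
    # Retrieve the registered object whose linked image best matches the
    # attributes extracted from the captured image (cf. Jiang par. 0065-0067);
    # below the threshold, no object is retrieved.
    best = max(database, key=lambda o: match_score(o.linked_features, image_features))
    return best if match_score(best.linked_features, image_features) >= threshold else None

db = [
    RegisteredObject("bowl", (1, 0, 1, 1, 0)),
    RegisteredObject("vase", (0, 1, 1, 0, 1)),
]
obj = retrieve(db, (1, 0, 1, 1, 1))
if obj:
    obj.content_links.append("content://server/msg-1")  # attach digital content
```

With the object retrieved, a later scan by another user would follow the same `retrieve` path and read the stored content links, mirroring the leave-a-message/receive-a-message scenario in Jiang's par. 0041.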

Prosecution Timeline

May 23, 2024
Application Filed
Mar 09, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592024
QUANTIFICATION OF SENSOR COVERAGE USING SYNTHETIC MODELING AND USES OF THE QUANTIFICATION
2y 5m to grant Granted Mar 31, 2026
Patent 12586296
METHODS AND PROCESSORS FOR RENDERING A 3D OBJECT USING MULTI-CAMERA IMAGE INPUTS
2y 5m to grant Granted Mar 24, 2026
Patent 12579704
VIDEO GENERATION METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 17, 2026
Patent 12573164
DESIGN DEVICE, PRODUCTION METHOD, AND STORAGE MEDIUM STORING DESIGN PROGRAM
2y 5m to grant Granted Mar 10, 2026
Patent 12573151
PERSONALIZED DEFORMABLE MESH BY FINETUNING ON PERSONALIZED TEXTURE
2y 5m to grant Granted Mar 10, 2026
Precedent list based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
80%
Grant Probability
98%
With Interview (+18.0%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 520 resolved cases by this examiner. Grant probability derived from career allow rate.
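The projection figures above follow from simple arithmetic on the examiner's career record. As a sketch (assuming the additive interview-lift model the dashboard's numbers imply; the tool's actual model is not disclosed):

```python
granted, resolved = 416, 520                     # examiner's career record
career_allow_rate = granted / resolved           # 0.80 -> the 80% grant probability
interview_lift = 0.18                            # observed +18.0% with an interview
with_interview = min(career_allow_rate + interview_lift, 1.0)  # 0.98 -> 98%
print(f"{career_allow_rate:.0%} base, {with_interview:.0%} with interview")
```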
