Prosecution Insights
Last updated: April 19, 2026
Application No. 17/126,746

SYSTEM AND METHOD FOR LOCATING PRODUCTS

Status: Non-Final OA (§103), Round 5
Filed: Dec 18, 2020
Examiner: KANG, TIMOTHY J
Art Unit: 3689
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Referboard Marketing Pty Ltd.

Grant Probability: 46% (Moderate)
Projected OA Rounds: 5-6
Projected Time to Grant: 3y 1m
Grant Probability With Interview: 72%

Examiner Intelligence

Career Allow Rate: 46% (129 granted / 280 resolved; -5.9% vs TC avg)
Interview Lift: +26.0% allowance rate with interview vs without
Avg Prosecution: 3y 1m
Currently Pending: 49
Total Applications: 329 (across all art units)

Statute-Specific Performance

§101: 45.8% (+5.8% vs TC avg)
§103: 37.1% (-2.9% vs TC avg)
§102: 6.3% (-33.7% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)
Tech Center averages are estimates; based on career data from 280 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/1/2026 has been entered.

Status of Claims

Claims 1-2, 4-7, 9-11, 14-16, 18-19, and 21-26 remain pending and are rejected. Claims 3, 8, 12-13, 17, and 20 have been cancelled.

Response to Arguments

Applicant's arguments filed on 1/1/2026 with respect to the rejection under 35 U.S.C. 103 have been fully considered, but are moot in light of the new grounds of rejection. Applicant's amendments necessitated the new grounds of rejection.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-6, 14-15, 18-19, 21, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Yoshii (US 20190325497 A1) in view of Cordova-Diba (US 20160042251 A1), and further in view of Grossman (US 20180197223 A1). Regarding Claim 1: Yoshii discloses a system comprising: a plurality of user devices; (Yoshii: [0048] – “The information system A includes one or at least two terminal apparatuses 1 and a server apparatus 2. Each terminal apparatus 1 is, for example, a smartphone, a tablet device, a so-called personal computer, a mobile phone, or the like. The terminal apparatus 1 preferably has a photograph function. There is no limitation on the type of the terminal apparatus”). a product server in communication over a network with the plurality of user devices; (Yoshii: [0048] – “The terminal apparatus 1 preferably has a photograph function. There is no limitation on the type of the terminal apparatus 1. The server apparatus 2 is a so-called cloud server or the like, and there is no limitation on the type thereof”; Yoshii: [0052] – “In the terminal storage unit 11 constituting the terminal apparatus 1, various types of information is stored. The various types of information is, for example, an image containing one or at least two objects. The image containing an object is an image in which the object appears.
The image containing an object is, for example, a photo or the like in which the object appears. The image is typically a taken photo. The image is an image that is transmitted to the server apparatus 2”). a product database storing product data corresponding to a plurality of products; (Yoshii: [0065] – “In the storage unit 20 constituting the server apparatus 2, various types of information can be stored. The various types of information is, for example, one or more pieces of product information, one or more pieces of user information, or the like. The product information is information related to a product or a service (hereinafter, referred to as products, etc.). The product information has, for example, a product identifier for identifying products, etc., and images, prices, features, attribute values (e.g., colors, sizes, shapes, etc.) or the like of products, etc.”). wherein each user device is configured to: receive a user input to share an image with the product server; (Yoshii: [0102] – “A terminal processing unit (not shown) of the terminal apparatus 1 determines whether or not to transmit the image. If it is determined to transmit an image, the procedure advances to step S305, and, if not, the procedure returns to step S301. The terminal processing unit checks the flag stored in the terminal storage unit 11. If the flag is information indicating that an image is to be transmitted, the terminal processing unit transmits the image”). in response to the receiving the image data from one of the plurality of user devices, retrieve the image from an image source based on the received image data; (Yoshii: [0106] – “The terminal receiving unit 14 determines whether or not one or more pieces of object information have been received from the server apparatus”; Yoshii: [0062] – “The terminal output unit 15 outputs an image and one or more object identifiers. There is no limitation on the output mode of an image and one or more object identifiers. 
The terminal output unit 15 preferably outputs one or more object identifiers in an image”). transmit the data corresponding to the at least one product to the one of the plurality of user devices in order to enable the one of the plurality of user devices to display information relating to the at least one product. (Yoshii: [0134] – “The presentation information acquiring unit 222 acquires presentation information, using the one or more object identifiers acquired in step S406 and the like. The presentation information acquiring unit 222 acquires presentation information, for example, using the “coordinate” algorithm or the “I want it!” algorithm”; Yoshii: [0135] – “The presentation information transmitting unit 232 transmits the acquired presentation information to the terminal apparatus 1”). Yoshii does not explicitly teach a system comprising: wherein each user device is configured to: extract image data corresponding to the image from an image data source; transmit the extracted image data to the product server; wherein the product server is configured to: reformat the retrieved image into a numerical metric array; scan the numerical metric array to identify and extract image features to identify one or more items with each item having a boundary box to thereby define one or more image sections in the retrieved image and a portion of the retrieved image not containing the one or more image sections, each image section associated with a particular one of the one or more identified items; determine one or more characteristics associated with the one or more identified items by analyzing the received image, said one or more characteristics determined, at least in part, by analyzing the portion of the image not containing the one or more image sections of the one or more identified items; convert the one or more image sections into one or more embedding vector representations, such that there is an embedding vector representation for each identified item; match the retrieved image with at least one of the plurality of products based on comparison of the one or more embedding vectors associated with the one or more identified items and the one or more characteristics determined at least in part from the portion of the image not containing the one or more image sections of the one or more identified items with the product data stored in the product database; Notably, however, Yoshii does disclose acquiring presentation information using the object identifiers that is the same or similar (Yoshii: [0133]). To that end, Cordova-Diba does teach a system comprising: wherein each user device is configured to: extract image data corresponding to the image from an image data source; (Cordova-Diba: [0095] - “Network device 120 generates a visual query from an input image, and sends the visual query to media analysis server 140. Media analysis server 140 receives the visual query from network device 120. Features are extracted or derived from visual attributes pertinent to the input image at one or both of the network device 120 and media analysis server 140. These features are then used by media analysis server 140 to locate and identify objects in images and generate interactive content from the original content. The interactive content may be made available to network device 120 by a content server”). transmit the extracted image data to the product server; (Cordova-Diba: [0113] – “the query image (e.g., comprising a region or entire image of interest) is transmitted, in a visual query, from the client application (e.g., comprising interactive module or application 210) to object localization server 360 using conventional digital wired network and/or wireless network means”). reformat the retrieved image into a numerical metric array; (Cordova-Diba: [0135] – “a process for tagging images is performed in step 745. FIG. 8 depicts a flowchart for this process of tagging images in step 745, according to an embodiment.
The process is initiated in step 810. In step 815, the normalized query image, resulting from steps 720 and 725 in the process depicted in FIG. 7, is loaded into the system. In step 820, this normalized query image is segmented (e.g., by image segmentation module 314) into perceptually homogeneous segments by a graph-based segmentation algorithm (e.g., graph-cut)”). scan the numerical metric array to identify and extract image features to identify one or more items with each item having a boundary box to thereby define one or more image sections in the retrieved image; (Cordova-Diba: [0138] – “In step 830, potential objects are detected (e.g., by object candidate generation module 320) in the query image by analyzing the regions within each segment (e.g., produced by image segmentation module 314) to calculate contour characteristics (e.g., maximum convexity defect, moments, areas, mass center, relative area, compactness, solidity, aspect ratio, etc.) and use these characteristics to determine if the current segmented region meets the requirements of a potential object of interest… The orientation of the object may then be determined by the orientation of the major axis of the fitting ellipse. A bounding rectangle (or bounding box of another shape) is generated around the connected components of each segment, providing localization information for the detected object candidate. The bounding rectangles around sets of segments are used to create cropped images from the query image. Each cropped image represents at least one detected object candidate. Graph-based image segmentation step 820, region merging step 825, and object candidate generation step 830 may together form at least a portion of an object localization step”). 
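As an editorial illustration only (not code from the application or from Cordova-Diba), the two steps mapped above, reformatting the retrieved image into a numerical array and defining a boundary box per detected item, can be sketched in a few lines. The function name, the threshold-based segmentation, and the toy 4-connected flood fill are all simplifying assumptions standing in for the reference's graph-based segmentation:

```python
import numpy as np

def bounding_boxes(image, threshold):
    """Threshold a numeric image array into a foreground mask and return
    one (top, left, bottom, right) box per 4-connected foreground region
    (bottom/right exclusive)."""
    array = np.asarray(image, dtype=float)  # the "numerical metric array"
    mask = array > threshold                # crude stand-in for segmentation
    seen = np.zeros(mask.shape, dtype=bool)
    rows, cols = mask.shape
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not seen[r, c]:
                # flood-fill one connected component, tracking its extent
                stack = [(r, c)]
                seen[r, c] = True
                top = bottom = r
                left = right = c
                while stack:
                    y, x = stack.pop()
                    top, bottom = min(top, y), max(bottom, y)
                    left, right = min(left, x), max(right, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                boxes.append((top, left, bottom + 1, right + 1))
    return boxes
```

Cropping each box out of the array would yield the claimed "image sections"; everything outside the boxes corresponds to the remaining portion of the image used for context.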
convert the one or more image sections into one or more embedding vector representations, such that there is an embedding vector representation for each identified item; (Cordova-Diba: [0147] – “the normalized query image (e.g., normalized according to the process discussed with respect to FIG. 7) loaded in step 815 is received. A multi-channel two-dimensional Gabor filter bank is constructed. In this filter bank, two-dimensional Gabor functions are convolved with the normalized photometric invariant color space version of the query image, by rotating and scaling the basis Gabor function, resulting in a multi-resolution decomposition of the input query image in the spatial and spatial-frequency domains. The highest magnitude of these Gabor filter outputs over each channel may be used to represent a filter response. A feature vector is constructed based on the Gabor filter output”). match the retrieved image with at least one of the plurality of products based on comparison of the one or more embedding vectors associated with the one or more identified items and the one or more characteristics determined at least in part from the portion of the image not containing the one or more image sections of the one or more identified items with the product data stored in the product database; (Cordova-Diba: [0235] – “A feature vector can be constructed using the TF-IDF score of all terms in the description. The similarity of two items—i.e., the search word and the metadata—could be estimated using the cosine of their TF-IDF vectors. A search for shoes could be carried out by using keywords such as “high heels,” “black leather,” “open toe,” “straps,” to only match shoes with a high degree of similarity to the sought item”). 
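The matching limitation quoted above (comparing per-item embedding vectors against stored product data) reduces to a nearest-neighbor search under a similarity measure. The sketch below is an editorial illustration using hypothetical names (`match_products`, a dict catalog) and cosine similarity, which Cordova-Diba mentions in the TF-IDF context; it is not the applicant's implementation:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def match_products(item_vector, catalog):
    """Rank products by similarity between an identified item's embedding
    vector and each product's stored vector; catalog maps name -> vector."""
    scored = [(name, cosine_similarity(item_vector, vec))
              for name, vec in catalog.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

In a fuller system the catalog vectors would come from the same embedding model applied to product images, so that geometric closeness tracks visual similarity.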
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Yoshii disclosing the identification of items from an image with the reformatting of the image into an array, identifying and extracting image features to identify items with a boundary box, and matching the retrieved items based on comparing the vector as taught by Cordova-Diba. One of ordinary skill in the art would have been motivated to do so in order to locate visually similar objects by identifying objects without any added information (Cordova-Diba: [0008]). Yoshii in view of Cordova-Diba does not explicitly teach a system comprising: identify and extract image features to identify one or more items in a portion of the retrieved image not containing the one or more image sections, each image section associated with a particular one of the one or more identified items; determine one or more characteristics associated with the one or more identified items by analyzing the received image, said one or more characteristics determined, at least in part, by analyzing the portion of the image not containing the one or more image sections of the one or more identified items; match the retrieved image with at least one of the plurality of products based on and the one or more characteristics determined at least in part from the portion of the image not containing the one or more image sections of the one or more identified items. Notably, however, Yoshii does disclose outputting an image with the determined object identifiers in response to acquiring the object identifiers (Yoshii: [0062]) and Cordova-Diba does disclose analyzing the graph segmented image to detect potential objects and generating a bounding box for each detected object candidate (Cordova-Diba: [0138]). 
To that end, Grossman does teach a system comprising: identify and extract image features to identify one or more items in a portion of the retrieved image not containing the one or more image sections, each image section associated with a particular one of the one or more identified items; (Grossman: [0048] – “additional features of the image analysis. As discussed above, context features (such as faces) may be identified in electronic images. A context feature 504 (a face) is illustrated surrounded by a rectangle in order to show the operation of the method. Similarly, an object text 506 (a name and number) is illustrated surrounded by a rectangle. The context feature 504 may be used to estimate an approximate size of the object, which may be used to identify or confirm the identification of the object as a basketball jersey. The object text 506 may likewise be used to confirm that the object is a basketball jersey, as well as providing addition information about the team and player associated with the object in order to generate better information for subsequence searching. Although the keyword phrases illustrated in FIGS. 5A-5B are related, other embodiments may present alternative keyword phrases associated with distinct objects within an image. For example, one keyword phrase may be associated with a shirt, while another keyword phrase may be associated with a basketball”). In summary, objects appearing in sections of the image that do not show the identified item, such as a face or another object, are used to estimate the size of the identified item or to determine other information about it.
determine one or more characteristics associated with the one or more identified items by analyzing the received image, said one or more characteristics determined, at least in part, by analyzing the portion of the image not containing the one or more image sections of the one or more identified items; (Grossman: [0048] – “The context feature 504 may be used to estimate an approximate size of the object, which may be used to identify or confirm the identification of the object as a basketball jersey. The object text 506 may likewise be used to confirm that the object is a basketball jersey, as well as providing addition information about the team and player associated with the object in order to generate better information for subsequence searching”). match the retrieved image with at least one of the plurality of products based on and the one or more characteristics determined at least in part from the portion of the image not containing the one or more image sections of the one or more identified items. (Grossman: [0084] – “The method 900 may begin by obtaining keywords associated with an object in an electronic image (block 902). The keywords may then be compared against a dataset of genre data to identify any matches (block 904). If no genres are found to match the keywords (block 906), the method 900 may terminate. If at least one genre match is found (block 906), the method 900 may next check for multiple genres match the keywords (block 908). If a plurality of genre matches are found (block 908), one of the genres may be selected from the plurality (block 910). Once one genre is identified, in some embodiments, the method 900 may include identifying one or more vendors associated with the genre (block 912). Based upon the identified genre, order options for purchase orders associated with the genre may also be identified”). 
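Grossman's use of a context feature (a face) to estimate the size of a separately detected object is, at bottom, a scale calculation: a feature of roughly known real-world size fixes a centimeters-per-pixel ratio for the whole image. A minimal editorial sketch, with a hypothetical function name and an assumed average face height (the 22 cm constant is an illustration, not a figure from any cited reference):

```python
def estimate_object_size_cm(object_height_px, face_height_px,
                            avg_face_height_cm=22.0):
    """Estimate an object's real-world height from a face elsewhere in the
    same image. avg_face_height_cm is an assumed typical face height; the
    face fixes the image's cm-per-pixel scale, which is then applied to
    the object's pixel height."""
    cm_per_px = avg_face_height_cm / face_height_px
    return object_height_px * cm_per_px
```

A jersey measuring four face-heights in pixels would thus be estimated at roughly 88 cm, consistent with an adult garment rather than, say, a keychain charm with the same pixel footprint in a close-up shot.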
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Yoshii in view of Cordova-Diba disclosing the identification of items from an image with the identifying and extracting image features from an image portion not containing the image section with the identified items, determining characteristics of the identified items from the image section not containing the identified items, and matching the retrieved image with at least one of a plurality of products from the characteristics as taught by Grossman. One of ordinary skill in the art would have been motivated to do so in order to assist a customer who does not know all the salient features of a product in finding that product (Grossman: [0003]). Regarding Claim 2: Yoshii in view of Cordova-Diba and Grossman discloses the limitations of claim 1 above. Yoshii further discloses wherein the image data extracted by each user device comprises an identifier of an image. (Yoshii: [0062] – “The terminal output unit 15 outputs an image and one or more object identifiers. There is no limitation on the output mode of an image and one or more object identifiers. The terminal output unit 15 preferably outputs one or more object identifiers in an image. The terminal output unit 15 outputs, for example, one or more object identifiers in an image at the positions indicated by the positional information of the object identifiers contained in the one or more pieces of object information”). Regarding Claim 4: Yoshii in view of Cordova-Diba and Grossman discloses the limitations of claim 1 above. Yoshii further discloses wherein each user device is configured to retrieve an image from the image source for display on a display of the user device.
(Yoshii: [0075] – “if an object that is acquired is a photo that has been uploaded to an SNS server (e.g., Instagram (registered trademark) or Facebook (registered trademark)), and information indicating reaction (e.g., information indicating “Like”) to the photo or posted information (which may be referred to as an article) containing the photo is the additional information, the object information acquiring unit 221 preferably acquires object information in descending order of the “Like!” count. In this case, it is assumed that the object information acquiring unit 221 acquires information indicating reaction to the photo or posted information containing the photo, from the SNS server”). Regarding Claim 5: Yoshii in view of Cordova-Diba and Grossman discloses the limitations of claim 1 above. Yoshii further discloses wherein the image source is an external computing system in communication over a network with each user device. (Yoshii: [0075] – “object information acquiring unit 221 acquires information indicating reaction to the photo or posted information containing the photo, from the SNS server”). Regarding Claim 6: Yoshii in view of Cordova-Diba and Grossman discloses the limitations of claim 1 above. Yoshii further discloses wherein the product server is in communication with a plurality of external computing systems, and is configured to receive product data from each of the plurality of external computing systems. (Yoshii: [0065] – “The product information is, for example, a web page on which a product is introduced or purchased. There is no limitation on the structure of the product information. The product information may be in an external server apparatus. In the storage unit 20, information (e.g., a URL, an IP address, an API, etc.) for accessing an external server apparatus may be stored. 
In this case, an external server apparatus is accessed using information for accessing the external server apparatus, and product information and the like are acquired by the server apparatus”). Regarding Claim 9: Yoshii in view of Cordova-Diba and Grossman discloses the limitations of claim 1 above. Yoshii further discloses wherein the product server is configured to generate a modified image comprising at least one selectable portion corresponding respectively to the at least one product. (Yoshii: [0108] – “The terminal output unit 15 constructs information that is to be output, using the one or more object identifiers acquired in step S309. The terminal output unit 15 preferably constructs information that is to be output, using the in-image positional information contained in the one or more pieces of object information, such that the one or more object identifiers are output to proper positions in the image”). Regarding Claim 21: Yoshii in view of Cordova-Diba and Grossman discloses the limitations of claim 1 above. Yoshii in view of Cordova-Diba does not explicitly teach the product server configured to: detect a face of a person within the image, the face is within the portion of the retrieved image not containing the one or more image sections of the one or more identified items; wherein at least one of the one or more characteristics is determined, at least in part, in accordance with the face detected within the image. Notably, however, Yoshii does disclose outputting an image with the determined object identifiers in response to acquiring the object identifiers (Yoshii: [0062]). To that end, Grossman does teach the product server configured to: detect a face of a person within the image, the face is within the portion of the retrieved image not containing the one or more image sections of the one or more identified items; (Grossman: [0048] – “As discussed above, context features (such as faces) may be identified in electronic images.
A context feature 504 (a face) is illustrated surrounded by a rectangle in order to show the operation of the method. Similarly, an object text 506 (a name and number) is illustrated surrounded by a rectangle. The context feature 504 may be used to estimate an approximate size of the object, which may be used to identify or confirm the identification of the object as a basketball jersey”). wherein at least one of the one or more characteristics is determined, at least in part, in accordance with the face detected within the image. (Grossman: [0048] – “The context feature 504 may be used to estimate an approximate size of the object, which may be used to identify or confirm the identification of the object as a basketball jersey. The object text 506 may likewise be used to confirm that the object is a basketball jersey, as well as providing addition information about the team and player associated with the object”). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Yoshii in view of Cordova-Diba disclosing the system for identifying products and providing product information from an image with the detecting of a face and determining one or more characteristics as taught by Grossman. One of ordinary skill in the art would have been motivated to do so in order to generate better information for subsequent searching for the object (Grossman: [0048]). Regarding Claims 10 and 19: Claims 10 and 19 recite substantially similar limitations as claim 1. Therefore, claims 10 and 19 are rejected under the same rationale as claim 1 above. Regarding Claim 11: Claim 11 recites substantially similar limitations as claim 2. Therefore, claim 11 is rejected under the same rationale as claim 2 above. Regarding Claim 14: Claim 14 recites substantially similar limitations as claim 5. Therefore, claim 14 is rejected under the same rationale as claim 5 above.
Regarding Claim 15: Claim 15 recites substantially similar limitations as claim 6. Therefore, claim 15 is rejected under the same rationale as claim 6 above. Regarding Claim 18: Claim 18 recites substantially similar limitations as claim 9. Therefore, claim 18 is rejected under the same rationale as claim 9 above. Regarding Claim 24: Claim 24 recites substantially similar limitations as claim 21. Therefore, claim 24 is rejected under the same rationale as claim 21 above. Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Yoshii (US 20190325497 A1), Cordova-Diba (US 20160042251 A1), and Grossman (US 20180197223 A1) as applied to claim 6 above, in view of Chang (US 20190080207 A1). Regarding Claim 7: The combination of Yoshii, Cordova-Diba, and Grossman discloses the limitations of claim 6 above. The combination does not explicitly teach wherein the product server is configured to index the product data received from the plurality of external computing systems. Notably, however, Yoshii does disclose the product information being in an external server apparatus (Yoshii: [0065]). To that end, Chang does teach wherein the product server is configured to index the product data received from the plurality of external computing systems. (Chang: [0119] – “The system captures or receives visual and/or text data from a third party 1102. In an example embodiment, the third party may include a user uploading images stored on a mobile device, or alternately the owner and/or operator of a mobile application and/or website. The read and listen module 1104 receives input text and subsequently detects and extracts, in the fashion situation, relevant brand/type, product category(s) and product attribute(s) keywords from the received input text at point 1106. The system constructs a query using only the brand/type and product category keywords. The system submits a constructed query to a third party retailer or merchant database 1108.
A server receives product data from the third party and stores products, in the fashion example, according to brand/type, name(s), product image(s), category(s), description, details, price, size availability, inventory availability, color(s), material(s), retailer and/or carrier information”). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of the combination of Yoshii, Cordova-Diba, and Grossman disclosing the display of product information that is identified from an image with the indexing of product data received from external systems as taught by Chang. One of ordinary skill in the art would have been motivated to do so in order to learn from the item information for accurate matches (Chang: [0121]; [0042]). Regarding Claim 16: Claim 16 recites substantially similar limitations as claim 7. Therefore, claim 16 is rejected under the same rationale as claim 7 above. Claims 22-23 and 25-26 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Yoshii (US 20190325497 A1), Cordova-Diba (US 20160042251 A1), and Grossman (US 20180197223 A1) as applied to claim 1 above, in view of Sakata (US 20110208790 A1). Regarding Claim 22: The combination of Yoshii, Cordova-Diba, and Grossman discloses the limitations of claim 1 above. Yoshii in view of Cordova-Diba does not explicitly teach the product server configured to: detect a portion of a person visible within the image, the portion is within the portion of the retrieved image not containing the one or more image sections of the one or more identified items; determine a gender of the person based at least in part on the portion of the person visible within the image; wherein at least one of the one or more characteristics is determined, at least in part, in accordance with the gender of the person.
Notably, however, Yoshii does disclose outputting an image with the determined object identifiers in response to acquiring the object identifiers (Yoshii: [0062]). To that end, Grossman does teach detect a portion of a person visible within the image, the portion is within the portion of the retrieved image not containing the one or more image sections of the one or more identified items; (Grossman: [0048] – “As discussed above, context features (such as faces) may be identified in electronic images. A context feature 504 (a face) is illustrated surrounded by a rectangle in order to show the operation of the method. Similarly, an object text 506 (a name and number) is illustrated surrounded by a rectangle. The context feature 504 may be used to estimate an approximate size of the object, which may be used to identify or confirm the identification of the object as a basketball jersey”). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Yoshii in view of Cordova-Diba disclosing the system for identifying products and providing product information from an image with the detecting of a face in the portion of the image not containing the one or more identified items as taught by Grossman. One of ordinary skill in the art would have been motivated to do so in order to generate better information for subsequent searching for the object (Grossman: [0048]). The combination does not explicitly teach the product server configured to: determine a gender of the person based at least in part on the portion of the person visible within the image; wherein at least one of the one or more characteristics is determined, at least in part, in accordance with the gender of the person.
Notably, however, Yoshii does disclose outputting an image with the determined object identifiers in response to acquiring the object identifiers (Yoshii: [0062]), and Grossman does disclose identifying faces in the image (Grossman: [0048]). To that end, Sakata does teach the product server configured to: determine a gender of the person based at least in part on the portion of the person visible within the image (Sakata: [0105] – “When a face area is extracted, the attribute information of the user such as the gender and age can be estimated by applying pattern recognition on the image information of the face area”); wherein at least one of the one or more characteristics is determined, at least in part, in accordance with the gender of the person (Sakata: [0150] – “a user attribute, such as gender and age, lifestyle variables A, and lifestyle variables B# are stored in the lifestyle database 160 for each user”; Sakata: [0211] – “when a product is recommended in online shopping, the output control unit 210 presents an image content illustrating the product having appearance information that matches the lifestyle information”; Sakata: [0166] – “the relevance degree calculating unit 140 calculates a relevance degree of the user for each object (S130). For example, the relevance degree calculating unit 140 calculates a relevance degree of the user to an object so that the relevance degree of the user to the object increases as a distance between the user and the object is shorter, using the object image and the user image”).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of the combination of Yoshii, Cordova-Diba, and Grossman disclosing the system for identifying products and providing product information from an image with the determining of the gender of the user and characteristics of the item based on the gender as taught by Sakata.
One of ordinary skill in the art would have been motivated to do so in order to recommend items that fit the lifestyle of the person (Sakata: [0007]).

Regarding Claim 23: The combination of Yoshii, Cordova-Diba, and Grossman discloses the limitations of claim 22 above. Yoshii in view of Cordova-Diba does not explicitly teach wherein the detected portion is a face of the person, and wherein the gender of the person is determined at least in part on the detected face of the person. Notably, however, Yoshii does disclose outputting an image with the determined object identifiers in response to acquiring the object identifiers (Yoshii: [0062]). To that end, Grossman does teach wherein the detected portion is a face of the person; Grossman teaches determining gender information about the customer from the face identified in the image (Oh: [0129]; see also: [0094]; [0162]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Yoshii in view of Cordova-Diba disclosing the system for identifying products and providing product information from an image with the detecting of a face in the portion of the image not containing the one or more identified items as taught by Grossman. One of ordinary skill in the art would have been motivated to do so in order to generate better information for subsequent searching for the object (Grossman: [0048]).

The combination does not explicitly teach wherein the gender of the person is determined at least in part on the detected face of the person. Notably, however, Yoshii does disclose outputting an image with the determined object identifiers in response to acquiring the object identifiers (Yoshii: [0062]), and Grossman does disclose identifying faces in the image (Grossman: [0048]). To that end, Sakata does teach wherein the gender of the person is determined at least in part on the detected face of the person.
(Sakata: [0150] – “a user attribute, such as gender and age, lifestyle variables A, and lifestyle variables B# are stored in the lifestyle database 160 for each user”). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of the combination of Yoshii, Cordova-Diba, and Grossman disclosing the system for identifying products and providing product information from an image with the gender being determined from the detected face as taught by Sakata. One of ordinary skill in the art would have been motivated to do so in order to recommend items that fit the lifestyle of the person (Sakata: [0007]).

Regarding Claim 25: Claim 25 recites substantially similar limitations as claim 22. Therefore, claim 25 is rejected under the same rationale as claim 22 above.

Regarding Claim 26: Claim 26 recites substantially similar limitations as claim 23. Therefore, claim 26 is rejected under the same rationale as claim 23 above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIMOTHY J KANG, whose telephone number is (571) 272-8069. The examiner can normally be reached Monday - Friday, 7:30 - 5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Maria-Teresa Thein, can be reached at 571-272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/T.J.K./
Examiner, Art Unit 3689

/VICTORIA E. FRUNZI/
Primary Examiner, Art Unit 3689

3/27/2026

Prosecution Timeline

Dec 18, 2020
Application Filed
Mar 09, 2021
Response after Non-Final Action
Aug 22, 2023
Non-Final Rejection — §103
Feb 26, 2024
Response Filed
Apr 17, 2024
Final Rejection — §103
Oct 22, 2024
Request for Continued Examination
Oct 23, 2024
Response after Non-Final Action
Nov 07, 2024
Non-Final Rejection — §103
May 14, 2025
Response Filed
Jun 30, 2025
Final Rejection — §103
Nov 20, 2025
Interview Requested
Dec 01, 2025
Applicant Interview (Telephonic)
Dec 01, 2025
Examiner Interview Summary
Jan 01, 2026
Request for Continued Examination
Feb 19, 2026
Response after Non-Final Action
Mar 27, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597058
IDENTIFICATION OF ITEMS IN AN IMAGE AND RECOMMENDATION OF SIMILAR ENTERPRISE PRODUCTS
2y 5m to grant Granted Apr 07, 2026
Patent 12541791
Qualitative commodity matching
2y 5m to grant Granted Feb 03, 2026
Patent 12468775
Assistance Method for Assisting in Provision of EC Abroad, and Program or Assistance Server For Assistance Method
2y 5m to grant Granted Nov 11, 2025
Patent 12469070
ITEM LEVEL DATA DETERMINATION DEVICE, METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIA
2y 5m to grant Granted Nov 11, 2025
Patent 12456141
DEVICE AND METHOD FOR SELLING INFORMATION PROCESSING DEVICE
2y 5m to grant Granted Oct 28, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
46%
Grant Probability
72%
With Interview (+26.0%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 280 resolved cases by this examiner. Grant probability derived from career allow rate.
