DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/forms/. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of patent US 11194952. Although the claims at issue are not identical, they are not patentably distinct from each other because the pending claims are either obvious variations of the patented claims or entirely covered by the patented claims.
For example, claim 1 of the instant application recites a method for visualizing products in a single-page application comprising multiple steps. These limitations are all disclosed by claim 1 of patent US 11194952. Therefore, claim 1 of the instant application is covered by claim 1 of patent US 11194952 and is not patentably distinct from the patented claim.
The following table illustrates a comparative mapping between the limitations of claim 1 of the instant application and the corresponding limitations of claim 1 of patent US 11194952.
Claim 1 of the Instant Application 18752316 compared with Claim 1 of the Patent 11194952:

Instant application: A method for visualizing products in a single-page application, comprising:
Patent: A method for visualizing products in a single-page application, comprising:

Instant application: receiving, by an application system from a client system during rendering of a webpage received by the client system from a host system, a first request for a script associated with the host system, the script executable by the client system to perform operations comprising:
Patent: receiving, by an application system from a client system during rendering of a webpage received by the client system from a host system, a first request for a script associated with the host system, the script executable by the client system to cause the client system to perform operations comprising:

Instant application: modifying the webpage to include a visualization application in response to a selection of a visualization control in the webpage by a user of the client system;
Patent: modifying the webpage to include a visualization control in the webpage; and modifying the webpage to include a visualization application in response to a selection of the visualization control in the webpage by a user of the client system;

Instant application: receiving, by the application system from the visualization application, a second request to display an augmented reality image; and
Patent: receiving, by the application system from the visualization application, a second request to display an augmented reality image; and

Instant application: providing, by the application system to the client system, instructions to cause the display, by the client system and using a native tool of the client system, of the augmented reality image in image data of an environment, the augmented reality image depicting a product at a location in the environment.
Patent: providing, by the application system to the client system, instructions to cause the display, by the client system and using a native tool of the client system, of the augmented reality image in image data of an environment, the augmented reality image depicting a product at a location in the environment.
The following is a complete listing of the correspondence between the claims of the instant application and the patent:
Claims 1-20 of the instant application 18752316 correspond one-to-one to claims 1-20 of the patent 11194952 (claim 1 of the application to claim 1 of the patent, claim 2 to claim 2, and so on through claim 20).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4, 7, and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Damy (US 20170132694) in view of Schweinfurth (US 11430051).
Regarding claim 1, Damy discloses A method for visualizing products in a single-page application, comprising: modifying the webpage to include a visualization application in response to a selection of a visualization control in the webpage by a user of the client system (Damy, fig.5a, “[0053] The system 102 includes, among others, components to support the operation of a wall art client/app 104 operating on any of mobile user device 114 or user stationary device 116 in conjunction with a wall art image server 106. The wall art system 102 further interoperates with a purchasing system 118 and third party art/image, i.e. wall art, server 120 to facilitate the selection and purchase of wall art products by a user. [0084] For example in FIG. 5a, the user is prompted in GUI 500 with a choice of menu items 510, 512 and 514, representing a collage layout selector 510 and ruler interface 512 and an application interface 514. In one embodiment, when the collage layout selection menu item 510 is selected, the user is further prompted to select an icon from a group of icons 510 representing the arrangement of the collage of items. Thus, if a user wants a curated layout of 5 items, then he selects the icon showing 5 items, and so forth”. Therefore, the layout selection menu item 510 corresponds to the visualization control);
receiving, by the application system from the visualization application, a second request to display an augmented reality image (Damy, “[0010] the system prompts the user to select a curated layout of items from any one or more curated layouts of a collage of items pre-arranged to fit within an area on the wall in proportion to the determined scale of the wall art environment. [0084] As can be seen in FIG. 5b, as the user selects the new icon showing 5 items (instead of the default selection of a single item illustrated in FIG. 5a), the background wall of the wall art environment dynamically updates to reflect the new selection, in this case going from one item to five items as reflected in the arrangement of the collage of items displayed on the background wall (partially obscured in FIG. 5b by icon group 510) and in the shopping interface 516”. Therefore, the user’s selection of 5 items corresponds to the second request); and
providing, by the application system to the client system, instructions to cause the display, by the client system and using a native tool of the client system, of the augmented reality image in image data of an environment, the augmented reality image depicting a product at a location in the environment (Damy, fig.5c, “[0010] Upon selection, the system positions the curated layout in the background of the wall art environment on the wall at a suitable location above the topmost edge of the furniture in the foreground so that the display of the curated layout of items simulates the appearance of being hung on the wall above the furniture in the user's furnished room or the predefined room type depending on which way the user decided to generate the wall art environment on their device. [0050] Although the processes or methods are described in terms of some sequential operations, it should be appreciated that some of the operations described may be performed in a different order. [0083] FIGS. 4f-4g, illustrate a user using their device to capture a home photo to use as their background wall environment in accordance with an embodiment of the wall art system. For example, the user in FIG. 4f views their room with their device's camera after having activated the camera interface 408 with the instructional prompt 430 on how to measure their furniture. The user in FIG. 4g then proceeds to compose and take the photo of their furniture and background wall using the measurement interface guideline superimposed over the camera interface. The photo captured with the camera then becomes the wall art environment and the user's device automatically returns to the wall art selection interface as described with reference to FIGS. 5a-5f. [0085] in FIG. 5c, the display of the GUI 500 reveals that a curated layout containing a 5-item collage has been selected; the selected curated layout 501 is displayed centered over and in proportion to the scale of the measured width of the furniture”).
On the other hand, Damy fails to explicitly disclose but Schweinfurth discloses receiving, by an application system from a client system during rendering of a webpage received by the client system from a host system, a first request for a script associated with the host system, the script executable by the client system to perform operations (Schweinfurth, col.4, lines 14-19, “the term “identification tag” refers to any image, text, string of characters, graphical object, or other identifier that can be visually depicted. The identification tag may be a barcode, QR code, URL, or other textual/graphical coding scheme that represents/corresponds to a consumer product”. Col.15, lines 55-62, “using the client application 266, the user may request and navigate a series of web pages, such as webpage 221 for instance, transmitted, preferably in a secure manner (e.g., using Hypertext Transfer Protocol Secure, known as “HTTPS”), by the proprietary server 202 to the client device 216. These web pages 221 may be interpreted and displayed via a web browser 270 of the client device 216”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Schweinfurth and Damy, to include all limitations of claim 1. That is, adding the step of displaying web pages based on the user’s request of Schweinfurth to the webpage modification of Damy. The motivation/suggestion would have been to provide a collaborative environment between the client and server devices, and an improved experience for users to reorder, return, and/or review consumer products (Schweinfurth, col.2, lines 4-5).
Regarding claim 4, Damy in view of Schweinfurth discloses The method of claim 1.
Damy further discloses the visualization control is associated with the product (Damy, “[0084] As can be seen in FIG. 5b, as the user selects the new icon showing 5 items (instead of the default selection of a single item illustrated in FIG. 5a), the background wall of the wall art environment dynamically updates to reflect the new selection, in this case going from one item to five items as reflected in the arrangement of the collage of items displayed on the background wall (partially obscured in FIG. 5b by icon group 510) and in the shopping interface 516”).
Regarding claim 7, Damy in view of Schweinfurth discloses The method of claim 1.
Damy further discloses forwarding, by the visualization application to an analytics system, a notification in response to detection of an event (Damy, fig.9, “[0037] The stored user-curated wall can be used to share the user's selection and customization of wall art with others, and to re-generate the curated layout and wall art environment on a user device when desired. [0070] the device processes user input for sharing a user's curated wall, such as sharing an image of the curated wall, and optionally including a link to purchase the customized items comprising the curated wall with other users on social media, email recipients, message recipients, and the like”. For example, a notification of a shared curated wall is forwarded in response to the user’s sharing input).
Regarding claim 9, Damy in view of Schweinfurth discloses The method of claim 1.
Damy further discloses wherein the location is fixed with respect to the environment (Damy, “[0010] Upon selection, the system positions the curated layout in the background of the wall art environment on the wall at a suitable location above the topmost edge of the furniture in the foreground”. Therefore, the wall art location is fixed with respect to the environment).
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Damy (US 20170132694) in view of Schweinfurth (US 11430051), and further in view of Gaikwad et al. (US 20120310787).
Regarding claim 2, Damy in view of Schweinfurth discloses The method of claim 1.
On the other hand, Damy in view of Schweinfurth fails to explicitly disclose but Gaikwad discloses the product is randomly selected by the application system based on product availability information (Gaikwad, “[0012] the methods and systems described herein identify available products to be displayed, cluster the identified products based on their similarity to one another, randomly select one or more products from each of the clusters, and display information, such as a title, associated with the randomly selected products”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Gaikwad into the combination of Schweinfurth and Damy, to include all limitations of claim 2. That is, adding the random selection based on available products of Gaikwad to the product selection of Schweinfurth and Damy. The motivation/suggestion would have been to maintain eligibility of some or all of the available products for selection and display (Gaikwad, [0033]).
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Damy (US 20170132694) in view of Schweinfurth (US 11430051), and further in view of BAE (US 20180047012).
Regarding claim 3, Damy in view of Schweinfurth discloses The method of claim 1.
On the other hand, Damy in view of Schweinfurth fails to explicitly disclose but BAE discloses the product comprises a predetermined product (BAE, “[0052] The order information is information generated when a user orders a predetermined product through the webpage of the market server 600 or the ordering application”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined BAE into the combination of Schweinfurth and Damy, to include all limitations of claim 3. That is, adding the predetermined product of BAE to the products of Schweinfurth and Damy. The motivation/suggestion would have been to provide products satisfying the user’s needs and preferences.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Damy (US 20170132694) in view of Schweinfurth (US 11430051), and further in view of Whiton et al. (US 8510043).
Regarding claim 5, Damy in view of Schweinfurth discloses The method of claim 1.
On the other hand, Damy in view of Schweinfurth fails to explicitly disclose but Whiton discloses identifying, based on the first request, a context of the webpage; and generating the script based on the context provided in the first request (Whiton, col.5, lines 3-6, “Such executable instructions may comprise, for example, static or dynamic client-side script sent to the client device 1 by the server 10 in the context of a web page request”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Whiton into the combination of Schweinfurth and Damy, to include all limitations of claim 5. That is, applying the context-based script generation of Whiton to generating the script of Schweinfurth and Damy. The motivation/suggestion would have been that the client device 1 receives executable instructions from one or more servers 10 that cause the device 1 to display a graphical depiction (Whiton, col.4, line 67-col.5, line 2).
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Damy (US 20170132694) in view of Schweinfurth (US 11430051), and further in view of Shmiel (US 20210334314).
Regarding claim 6, Damy in view of Schweinfurth discloses The method of claim 1.
On the other hand, Damy in view of Schweinfurth fails to explicitly disclose but Shmiel discloses wherein the script is generated from a template associated with a context provided in the first request (Shmiel, “[0020] receiving an input search query that satisfies a context template comprising a sequence of one or more words and a wildcard, wherein a wildcard represents variable data, wherein the input search query satisfies the context template and comprises a target word sequence that corresponds to the wildcard in the context template; and determining a plurality of sibling search queries for the input search query”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Shmiel into the combination of Schweinfurth and Damy, to include all limitations of claim 6. That is, applying the sibling-query determination of Shmiel to generating the script of Schweinfurth and Damy. The motivation/suggestion would have been to provide an improvement in the field of natural language processing (Shmiel, [0027]).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Damy (US 20170132694) in view of Schweinfurth (US 11430051), and further in view of UKISHIRO et al. (US 20220148269).
Regarding claim 8, Damy in view of Schweinfurth discloses The method of claim 1.
On the other hand, Damy in view of Schweinfurth fails to explicitly disclose but UKISHIRO discloses wherein the image data of the environment comprises a live stream of images acquired by the client system (UKISHIRO, “[0043] At this time, the photographing or filming in the direction toward the room wall by the user terminal 200 may be video filming or continuous shooting of still images. [0066] As processing of step S102, the image information transmission unit 252 of the user terminal 200 transmits the image information that is captured by photographing or filming a user's room with a camera function through a user's operation to the information providing apparatus 100”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined UKISHIRO into the combination of Schweinfurth and Damy, to include all limitations of claim 8. That is, applying the camera filming function of UKISHIRO to the client device of Schweinfurth and Damy. The motivation/suggestion would have been to present a state in which candidate furniture for purchase is arranged in image information or video information captured by photographing or filming a room (UKISHIRO, [0002]).
Claims 10-13 are rejected under 35 U.S.C. 103 as being unpatentable over Damy (US 20170132694) in view of Schweinfurth (US 11430051), and further in view of Ayush et al. (US 20200273090).
Regarding claim 10, Damy discloses A method for visualizing products in a single-page application, comprising: modifying the webpage to include a visualization application in response to a selection of a visualization control in the webpage by a user of the client system (Damy, fig.5a, “[0053] The system 102 includes, among others, components to support the operation of a wall art client/app 104 operating on any of mobile user device 114 or user stationary device 116 in conjunction with a wall art image server 106. The wall art system 102 further interoperates with a purchasing system 118 and third party art/image, i.e. wall art, server 120 to facilitate the selection and purchase of wall art products by a user. [0084] For example in FIG. 5a, the user is prompted in GUI 500 with a choice of menu items 510, 512 and 514, representing a collage layout selector 510 and ruler interface 512 and an application interface 514. In one embodiment, when the collage layout selection menu item 510 is selected, the user is further prompted to select an icon from a group of icons 510 representing the arrangement of the collage of items. Thus, if a user wants a curated layout of 5 items, then he selects the icon showing 5 items, and so forth”. Therefore, the layout selection menu item 510 corresponds to the visualization control);
receiving, by the application system from the visualization application, a display request (Damy, “[0084] As can be seen in FIG. 5b, as the user selects the new icon showing 5 items (instead of the default selection of a single item illustrated in FIG. 5a), the background wall of the wall art environment dynamically updates to reflect the new selection, in this case going from one item to five items as reflected in the arrangement of the collage of items displayed on the background wall (partially obscured in FIG. 5b by icon group 510) and in the shopping interface 516”. Therefore, the user’s selection of 5 items corresponds to the display request).
On the other hand, Damy fails to explicitly disclose but Schweinfurth discloses receiving, by an application system from a client system during rendering of a webpage received by the client system from a host system, a first request for a script associated with the host system, the script executable by the client system to perform operations (Schweinfurth, col.4, lines 14-19, “the term “identification tag” refers to any image, text, string of characters, graphical object, or other identifier that can be visually depicted. The identification tag may be a barcode, QR code, URL, or other textual/graphical coding scheme that represents/corresponds to a consumer product”. Col.15, lines 55-62, “using the client application 266, the user may request and navigate a series of web pages, such as webpage 221 for instance, transmitted, preferably in a secure manner (e.g., using Hypertext Transfer Protocol Secure, known as “HTTPS”), by the proprietary server 202 to the client device 216. These web pages 221 may be interpreted and displayed via a web browser 270 of the client device 216”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Schweinfurth and Damy. That is, adding the step of displaying web pages based on the user’s request of Schweinfurth to the webpage modification of Damy. The motivation/suggestion would have been to provide a collaborative environment between the client and server devices, and an improved experience for users to reorder, return, and/or review consumer products (Schweinfurth, col.2, lines 4-5).
On the other hand, Damy in view of Schweinfurth fails to explicitly disclose but Ayush discloses identifying a location in a first image associated with the display request (Ayush, “[0050] the viewpoint selection module 210 can determine when the location of the client device is static (or relatively static) and capture a frame of a camera feed received from the client device at a time instant when the client device satisfies a stillness (and/or movement) threshold. [0054] To generate proposal regions of a captured viewpoint, the object identification module 220 implements a region proposal algorithm as part of the object detection network to hypothesize object locations within the viewpoint”. Therefore, an object location is identified in the camera feed associated with the client device which satisfies a stillness threshold);
providing, to the client system for display in the visualization application, an indication of the location (Ayush, “[0054] the object identification module 220 takes a viewpoint as an input image for the R-CNN and generates object proposals (such as bounding boxes around identified objects 815a-815e of FIG. 8) with corresponding confidence scores and object labels. [0056] the object identification module 220 can generate a bounding box represented by two coordinate pairs, one for the top-left corner (x1, y1) and another for the bottom-right corner (x2, y2). [0059] the object identification module 220 takes the viewpoint and outputs the bounding boxes which indicate locations of real-world objects”);
receiving a selection of the indication (Ayush, “[0056] for each bounding box corresponding to a different object identified within the viewpoint… each bounding box b.sub.i has a corresponding object label l.sub.i and confidence score c.sub.i. [0124] the object with the highest overall incompatibility energy is selected as the least compatible object in a viewpoint”. Since each identified object corresponds to a bounding box and an object label, selecting an object indicates selecting a bounding box or an object label); and
providing, based on the selection, instructions to the visualization application for displaying an image of a product associated with the location in the location (Ayush, “[0079] the one or more candidate products may be selected based on satisfying a compatibility threshold in relation to 3D models corresponding to real-world objects other than the least compatible object. For example, the multiple candidate products may have (or be of) a same product class or type as the least compatible object. [0082] to remove the least compatible object (422), the object compatibility and retargeting service, performs a context aware removal process [0083] the object compatibility and retargeting service embeds candidate product 3D models on or in the viewpoint at the same location in the viewpoint with same pose and scale as the removed least compatible object”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Ayush into the combination of Schweinfurth and Damy, to include all limitations of claim 10. That is, adding the above steps of Ayush to the application system of Schweinfurth and Damy. The motivation/suggestion would have been to utilize machine learning and artificial neural networks to identify and replace the least compatible objects in digital representations of real-world environments (Ayush, [0001]).
Regarding claim 11, Damy in view of Schweinfurth and Ayush discloses The method of claim 10.
On the other hand, Damy in view of Schweinfurth fails to explicitly disclose but Ayush discloses identifying a product type corresponding to the location (Ayush, “[0054] the object identification module 220 takes a viewpoint as an input image for the R-CNN and generates object proposals (such as bounding boxes around identified objects 815a-815e of FIG. 8) with corresponding confidence scores and object labels. [0056] An object label describes the type of object identified within the viewpoint (e.g., chair, sofa, handbag, skirt, etc.)”); and
the instructions are provided based in part on the product type corresponding to the location (Ayush, “[0079] the multiple candidate products may have (or be of) a same product class or type as the least compatible object. Alternatively, the candidate products may be all of the products in the product data store or some other subset of the products, e.g., including multiple similar product classes or types (or some combination or variation of product subtypes)”). The same motivation of combining Ayush in claim 10 applies here.
Regarding claim 12, Damy in view of Schweinfurth and Ayush discloses The method of claim 10.
On the other hand, Damy in view of Schweinfurth fails to explicitly disclose but Ayush discloses performing semantic segmentation of the first image to identify features in the first image; and identifying the location from at least one of the identified features (Ayush, “[0056] the object identification module 220 can generate a bounding box represented by two coordinate pairs, one for the top-left corner (x1, y1) and another for the bottom-right corner (x2, y2). [0057] To implement the Fast R-CNN, the object identification module 220 can utilize the networks and techniques described in Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik, Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation, UC Berkeley (2014), which is incorporated herein by reference in its entirety.”). The same motivation of combining Ayush in claim 10 applies here.
Regarding claim 13, Damy in view of Schweinfurth and Ayush discloses The method of claim 10.
On the other hand, Damy in view of Schweinfurth fails to explicitly disclose but Ayush discloses using a machine learning model to identify the location in the first image (Ayush, “[0056] The object identification module 220 further utilizes the predicted object bounds and object quality region proposals as input for the Fast R-CNN to detect the objects within the viewpoint. For example, the object identification module 220 can generate a bounding box represented by two coordinate pairs, one for the top-left corner (x1, y1) and another for the bottom-right corner (x2, y2).”). The same motivation of combining Ayush in claim 10 applies here.
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Damy (US 20170132694) in view of Schweinfurth (US 11430051), and further in view of Ayush et al. (US 20200273090) and SCHÄFER et al. (US 20190213451).
Regarding claim 14, Damy in view of Schweinfurth and Ayush discloses the method of claim 10.
On the other hand, Damy in view of Schweinfurth and Ayush fails to explicitly disclose, but SCHÄFER discloses, providing the first image to a convolutional neural network selected based on at least one of a type of the object, the webpage, or the host system (SCHÄFER, “[0042] In such a situation, the selection of one of the plurality of particularized convolutional neural networks 40 to use for the processing of the image data based is further based on what type of roads the vehicle 26 traveling on”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined SCHÄFER with the combination of Damy, Schweinfurth, and Ayush to include all limitations of claim 14, that is, applying SCHÄFER's selection of a convolutional neural network for processing image data to the product images of Damy, Schweinfurth, and Ayush. The motivation/suggestion would have been to provide a system to detect and classify objects in an image (SCHÄFER, [0005]: a system for highly automated driving of a vehicle to detect and classify pedestrians, traffic signs, and other vehicles is provided).
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Damy (US 20170132694) in view of Schweinfurth (US 11430051), and further in view of Ayush et al. (US 20200273090) and LIU et al. (US 20210365749).
Regarding claim 17, Damy in view of Schweinfurth and Ayush discloses the method of claim 10.
On the other hand, Damy in view of Schweinfurth and Ayush fails to explicitly disclose, but LIU discloses, obtaining, by the application system from local storage associated with a browser of the client system, session information for the user (LIU, “[0036] As shown in FIG. 2a, a user may open an instant messaging application (for example, QQ or WeChat) on the terminal device 100a, and click/tap a session page 2a1 corresponding to any contact or group (which may be understood as a platform on which people (e.g., with the same hobbies or attributes) get together to chat and communicate with each other) in the instant messaging application”); and
obtaining, by the application system, the first image using the session information (LIU, “[0034] Using the terminal device 100a as an example, on a session page of an instant messaging application, the terminal device 100a may obtain, in response to an expression image trigger operation of a user, image data (if an operation object operated by the user is image data) associated with the expression image trigger operation”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined LIU with the combination of Damy, Schweinfurth, and Ayush to include all limitations of claim 17, that is, applying LIU's obtaining of an image to the method of Damy, Schweinfurth, and Ayush. The motivation/suggestion would have been to provide an image data processing method and apparatus, an electronic device, and a storage medium, to improve image data processing efficiency (LIU, [0005]).
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Damy (US 20170132694) in view of Schweinfurth (US 11430051), and further in view of Ayush et al. (US 20200273090) and Criddle et al. (US 12437335).
Regarding claim 19, Damy in view of Schweinfurth and Ayush discloses the method of claim 10.
On the other hand, Damy in view of Schweinfurth and Ayush fails to explicitly disclose, but Criddle discloses, providing a set of potential products to a vendor system including the product; and receiving a selection of a subset from the vendor system for display on the webpage by the visualization application (Criddle, col. 22, lines 41-43, “At 806, the acquisition service 104B may receive requestor input specifying requirements for acquiring items (e.g., requirements of the requestor 108B for items)”; col. 22, lines 58-62, “At 808, the acquisition service 104B may provide requirements received from a requestor to vendors. For example, the acquisition service 104B may provide an interface (e.g., the interface 302 of FIG. 3) to a vendor to enable the vendor to specify an ability to fulfill requirements”; col. 22, lines 63-67, “At 810, the acquisition service may receive input from a vendor(s) specifying which requirements the vendor(s) is able to fulfill. For instance, the interface 302 may be provided to the vendor 106B. The vendor 106B can then specify if they can fulfill the requirement, they cannot fulfill the requirement, or if they can partially fulfill the requirement”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Criddle with the combination of Damy, Schweinfurth, and Ayush to include all limitations of claim 19, that is, adding Criddle's vendor selection of items to the products of Damy, Schweinfurth, and Ayush. The motivation/suggestion would have been to provide an acquisition service platform to enable requestors to select entities (e.g., vendors, sellers, merchants, service providers, etc.) for fulfilling requirements of the requestors for items (Criddle, col. 3, lines 4-7).
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Damy (US 20170132694) in view of Schweinfurth (US 11430051), and further in view of Ayush et al. (US 20200273090) and Glazebrook (US 20200395769).
Regarding claim 20, Damy in view of Schweinfurth and Ayush discloses the method of claim 10.
On the other hand, Damy in view of Schweinfurth and Ayush fails to explicitly disclose, but Glazebrook discloses, the host system comprises an interactive kiosk machine including a QR code scanner; and the operations further comprise: scanning a QR code using the QR code scanner; and uploading, based on the scanned QR code, the first image (Glazebrook, “[0119] According to at least one embodiment of the present disclosure, the mobile device holder may be used to connect a kiosk to a scanner device. The scanner device could be for example, a bar code scanner, a QR-code scanner, or an image scanner. The kiosk and the scanner, combined by the mobile device holder, allows a user to scan items, bar or QR-codes, or other images and upload them for further processing”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Glazebrook with the combination of Damy, Schweinfurth, and Ayush to include all limitations of claim 20, that is, adding Glazebrook's process of uploading an image to the method of Damy, Schweinfurth, and Ayush. The motivation/suggestion would have been to provide a mobile device holder for joining electronic and non-electronic devices (Glazebrook, [0002]).
Allowable Subject Matter
Claims 15, 16, and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 15, it recites classifying pixels in the first image as corresponding to a surface depicted in the first image; and identifying the location based on the classified pixels in the first image. None of the prior art of record, nor any of the prior art searched, alone or in combination, renders obvious the combination of elements recited in the claim as a whole.
Regarding claim 18, it recites detecting a device parameter of the client system; and generating a modified first image based on the detected device parameter; and the instructions comprise instructions for displaying the image of the product at the location in the modified first image. None of the prior art of record, nor any of the prior art searched, alone or in combination, renders obvious the combination of elements recited in the claim as a whole.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRACE Q LI whose telephone number is (571)270-0497. The examiner can normally be reached Monday - Friday, 8:00 am-5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, DEVONA FAULK can be reached at 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GRACE Q LI/Primary Examiner, Art Unit 2618
3/6/2026