Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on has been entered.
Response to Arguments
Applicant’s arguments with respect to the claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. That is, the newly cited reference Sharma discloses the limitations of the claims as amended. Therefore, the claims stand rejected as set forth below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6, and 25-27 are rejected under 35 U.S.C. 103 as being unpatentable over Gruttadauria et al., US 2008/0043013 A1 (hereinafter “Gruttadauria”) in view of Sorensen, US 2006/0200378 A1 (hereinafter “Sorensen”) further in view of Sharma et al., US 11,615,430 B1 (hereinafter “Sharma”).
Regarding claim 1, Gruttadauria discloses a goods display assistance system (generally, Abstract, [0006]-[0007] shopping environment design system for producing a three-dimensional image including design elements for product placement on shelves; FIG. 1, and [0030] computing environment 100 including a virtual reality system) comprising:
at least one memory storing instructions (FIG. 1, storage 124, and memory 128 [0031] [0035]); and
at least one processor (FIG. 1, CPUs 122 at [0031] connected to the storage 124 and memory 128) configured to execute the instructions ([0029] and [0031]) to:
display a predetermined goods display ([0006]-[0007], three-dimensional image being selected from one or more of product displays, shelf layouts, product placement on shelves, aisle configuration, etc.), in a store ([0005]-[0007] three-dimensional image is of a retail store) in a virtual space ([0006]-[0007] three-dimensional image is in an immersive virtual environment, further at [0025], [0032] [0036]) on a head mounted display (FIG. 2, head mounted display 225 at [0038]-[0039]) worn by a user (FIG. 2 [0038]-[0039] individual/user wearing the head mounted display 225, user may wear a headset);
detect a movement (FIG. 2 motion sensing devices 207 including goggles and eye movement tracking device 215 at [0037]-[0039]) of a line of sight of the user (FIG. 2, eye movement tracking device 215 at [0037]-[0039], movement of the sphere; [0047] eye-tracking tool observes the motion of the wearer’s eyes to determine direction of eye, and motion sensing 207 all of which to determine field of view was being looked at); and
determine a new goods display (FIGS. 2-3 and [0008] describing a theme area and an activity-based product grouping; [0039] participant interaction and movement results in corresponding visual experience for the user of the headset; and [0062]-[0063] new layout) based on the identified times (FIGS. 2-3 and [0037]-[0039] [0047] allows the user to observe in the field of view the products on the shelf in virtual environment based on eye-tracking results, goods information updated/corresponds with participant’s movement; [0059] describing time spent exploring a targeted portion of the aisle; the time data may be used in determining new layouts [0062]-[0066]).
However, Gruttadauria does not explicitly disclose identify, for each product of a plurality of products, a time from when the user enters the store to when the line of sight is directed to the product in the predetermined goods display based on the movement of the line of sight, wherein the new goods display displays the plurality of products arranged in ascending order according to the time from when the user enters the store to when the line of sight is directed to each respective product, at a position that is easy for a customer to see, and wherein the position is a predetermined position.
In the same field of endeavor, Sorensen discloses identify, for each product of a plurality of products, a time from when the user enters the store to when the line of sight is directed to the product in the predetermined goods display based on the movement of the line of sight (FIG. 7 and [0054]-[0058] shopper path enters the shopping environment travels to a region from which one or more products are purchased and to POS, and data analyzer 16 configured to recognize patterns based on the time duration of the shopping trip as well as physical location of the shopper path, determination of visibility measure 112 at [0058] determining product from field of view, further at [0060]-[0064] measuring other time data analysis in shopper behavior including dwell times and conversion times).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the modifiable goods display system of Gruttadauria to incorporate the shopper destination pattern determination of Sorensen because the references are within the same field of endeavor, namely, modifying product placement in a retail environment based on user data including viewing direction. The motivation to combine these references would have been to gather inexpensive and accurate shopper habit data in light of product placement within a shopping environment thereby improving the marketable information (see Sorensen at [0003]-[0005]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
However, Gruttadauria in view of Sorensen does not explicitly disclose wherein the new goods display displays the plurality of products arranged in ascending order according to the time from when the user enters the store to when the line of sight is directed to each respective product, at a position that is easy for a customer to see, and wherein the position is a predetermined position.
In the same field of endeavor, Sharma discloses wherein the new goods display displays the plurality of products arranged in ascending order according to the time from when the user enters the store to when the line of sight is directed to each respective product (FIGS. 2-7 and col. 8, lines 9-end through col. 9, lines 1-30, further at col. 10, lines 38-end describing trip vector determination and path to purchase descriptor extraction at col. 11, lines 48 through col. 12, lines 1-43, and describing ranking and placement in accordance at least at FIG. 7 and col. 13, lines 15-end and producing product placement accordingly, and ranking described therein), at a position that is easy for a customer to see (FIGS. 13-16 and col. 20, lines 18-end and col. 21, lines 1-29 describing modeling and product placement in accordance with gathered data based on ranking of product and shopping behavior), and wherein the position is a predetermined position (placement determination based on marketing information and known high traffic areas as disclosed at col. 3, lines 12-36 and FIGS. 13-16 and col. 20, lines 18-end and col. 21, lines 1-29).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the modifiable goods display system of Gruttadauria in view of Sorensen to incorporate the trip data and consumer demographic information as disclosed by Sharma because the references are within the same field of endeavor, namely, modifying product placement in a retail environment based on user data including user movements and times spent in various ways within the store. The motivation to combine these references would have been to optimize a virtual shopping environment customized for each user (Sharma at col. 3, lines 13-end). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Regarding claim 2, Gruttadauria in view of Sorensen further in view of Sharma discloses the goods display assistance system (Gruttadauria FIG. 1, 100) according to claim 1 (see above), wherein the at least one processor (Gruttadauria FIG. 1, CPUs 122) is further configured to execute the instructions (Gruttadauria [0029] and [0031]) to:
determine the new goods display (Gruttadauria [0057]-[0058]) in which at least one of a type of a shelf (see below, condition satisfied), a color of the shelf (see below, condition satisfied), a size of the shelf (see below, condition satisfied), the number of stages of the shelf (see below, condition satisfied), a goods display position of a product of the plurality of products displayed on the shelf (see below, condition satisfied), a product of the plurality of products displayed on the shelf (Gruttadauria FIGS. 2-3 and [0006]-[0007] and [0025]-[0027] and [0034] product placement on shelves, and [0057]-[0058] proposed alternatives – shelf layout), lighting on at least one of the shelf and the product (Gruttadauria [0051], [0053] and [0057]-[0058] proposed alternatives – lighting for store and thereby product and shelf), an advertisement material of the product (see above and below, condition satisfied), and decoration for at least one of the shelf and the product (Gruttadauria [0057]-[0058], product packaging modified) is changed (see above, Gruttadauria changes at FIGS. 2-3 and [0057]-[0060] and [0006]-[0007]).
Regarding claim 3, Gruttadauria in view of Sorensen further in view of Sharma discloses the goods display assistance system according to claim 1 (see above), wherein the at least one processor (Gruttadauria FIG. 1, CPUs 122) is further configured to execute the instructions (Gruttadauria [0029] and [0031]) to:
display the determined new goods display on the head mounted display (Gruttadauria [0051], [0053], and [0057]-[0058] modified store elements including shelves, layout and products displayed to user on the head mounted display FIG. 2 and head mounted display 225 and [0038]-[0039] and [0047]).
Regarding claim 4, Gruttadauria in view of Sorensen further in view of Sharma discloses the goods display assistance system according to claim 1 (see above), wherein the at least one processor (FIG. 1, CPUs 122) is further configured to execute the instructions (Gruttadauria [0029] and [0031]) to:
display the determined new goods display on a head mounted display worn by another user different from the user (Gruttadauria [0058]-[0059] describing more than one user (i.e., other users) interacting, and head mounted displays 225 of FIG. 2 described at [0025] and [0038]-[0039] and [0047]).
Regarding claim 6, Gruttadauria in view of Sorensen further in view of Sharma discloses the goods display assistance system according to claim 1 (see above), wherein the at least one processor (Gruttadauria FIG. 1, CPUs 122) is further configured to execute the instructions (Gruttadauria [0029] and [0031]) to:
output the determined goods display as a goods display candidate (Gruttadauria FIGS. 2-3 and [0006]-[0007] and [0054]-[0059], providing optional layouts, product placement, etc. to user to determine preferences and gather data) in a store (Gruttadauria FIGS. 2-3 [0006]-[0007] the goods display candidate is of a current real layout of retail store modified to be tested on the user in the virtual space) in a real space (Gruttadauria [0006]-[0007] is layout of a real retail store in a real space; proposed changes at [0057]-[0060]).
Regarding claim 25, it is similar in scope to claim 1 above, the only difference being that claim 25 is directed to a goods display assistance method (see Gruttadauria at [0008] and [0050] and FIG. 3 generally). Therefore, claim 25 is similarly analyzed and rejected as claim 1.
Regarding claim 26, it is similar in scope to claim 1 above, the only difference being that claim 26 is directed to a non-transitory computer-readable recording medium (Gruttadauria at [0028], storage 124, memory 128) that records a program (Gruttadauria at [0028]-[0029] and [0031]), the program causing a computer to execute the processes (Gruttadauria, FIG. 1 computer system 105, server system 120 and virtual reality user interaction devices 112 at [0029]-[0032]). Therefore, claim 26 is similarly analyzed and rejected as claim 1.
Regarding claim 27, Gruttadauria in view of Sorensen further in view of Sharma discloses the goods display assistance system according to claim 1 (see above), wherein the new goods display includes a shelf in which products of the plurality of products that the user pays attention to are collected (Gruttadauria at [0047]-[0055] for eye tracking determination and shelf layouts and configurations accordingly, of which time spent is a factor in creating shelf layout; Sorensen describing dwell and conversion times at [0063]-[0064], FIG. 7 and [0054]-[0058] shopper path enters the shopping environment, travels to a region from which one or more products are purchased and to POS, and data analyzer 16 configured to recognize patterns based on the time duration of the shopping trip as well as physical location of the shopper path, determination of visibility measure 112 at [0058] determining product from field of view, further at [0060]-[0063] measuring other time data analysis in shopper behavior), and the at least one processor is further configured to execute the instructions to: display the determined new goods display on the head mounted display when the user approaches a payment device (Gruttadauria at [0049]-[0052] and [0055]-[0059] describing displaying shelves accordingly via a head mounted display based on various layouts created from data collected).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Gruttadauria in view of Sorensen further in view of Sharma as applied to claim 4 above, further in view of Perkins, US 2008/0162262 A1 (hereinafter “Perkins”).
Regarding claim 5, Gruttadauria in view of Sorensen further in view of Sharma discloses the goods display assistance system according to claim 4 (see above).
However, Gruttadauria in view of Sorensen further in view of Sharma does not explicitly disclose wherein the user and the another user have similar physical information in the virtual space.
In the same field of endeavor, Perkins discloses wherein the user and the another user have similar physical information in the virtual space (Perkins at [0053] physical information such as height is conceivably the same for two users and applied as such in the virtual space).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the virtual shopping display system of Gruttadauria in view of Sorensen further in view of Sharma to incorporate the user profile data as disclosed by Perkins because the references are within the same field of endeavor, namely, modifiable virtual shopping display systems using a head mounted display device. The motivation to combine these references would have been to improve the quality of the virtual reality simulation for the user/participants (see Perkins at least at [0010] and [0052] – last sentence). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Gruttadauria in view of Sorensen further in view of Sharma as applied to claim 1, further in view of Yamamoto et al., US 2019/0191994 A1 (hereinafter “Yamamoto”).
Regarding claim 7, Gruttadauria in view of Sorensen further in view of Sharma discloses the goods display assistance system according to claim 1 (see above), wherein the at least one processor (Gruttadauria, FIG. 1, CPUs 122) is further configured to execute the instructions (Gruttadauria, [0029] and [0031]) to:
identify a time during which the line of sight is directed to the predetermined goods display based on the movement of the line of sight (Gruttadauria, FIGS. 2-3 and [0059] time spent exploring a targeted portion of an aisle), and
determine the new goods display based on the identified time for each product (Gruttadauria, FIGS. 2-3 and [0057]-[0060] proposed changes input into the associated computing system that may be observed and explored by the user to gather additional feedback from the user activities collected).
However, Gruttadauria in view of Sorensen further in view of Sharma does not explicitly disclose the line of sight is directed to each product in the goods display.
In the same field of endeavor, Yamamoto discloses the line of sight being directed to each product in the goods display (Yamamoto at [0039]-[0040]).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the modifiable goods display system of Gruttadauria in view of Sorensen further in view of Sharma to incorporate the line of sight product time gaze as disclosed by Yamamoto because the references are within the same field of endeavor, namely, modifying product placement in a retail environment based on user data including eye gaze and direction and time. The motivation to combine these references would have been to improve layout of a product shelf in a store (see Yamamoto at least at [0039]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Regarding claim 8, Gruttadauria in view of Sorensen further in view of Sharma discloses the goods display assistance system according to claim 1 (see above), wherein the at least one processor (Gruttadauria, FIG. 1, CPUs 122) is further configured to execute the instructions (Gruttadauria, [0029] and [0031]) to:
identify the line of sight is directed to the predetermined goods display based on the movement of the line of sight (Gruttadauria, FIG. 2, eye movement tracking device 215, [0047] monitoring eyes of participants), and
determine the new goods display (Gruttadauria, [0059] proposed changes input into the associated computing system that may be observed and explored by the user to gather additional feedback from the user activities collected).
However, Gruttadauria in view of Sorensen further in view of Sharma does not explicitly disclose the number of times the line of sight is directed to each product in the goods display; and determine the new goods display based on the identified number of times for each product.
In the same field of endeavor, Yamamoto discloses the number of times the line of sight is directed to each product (Yamamoto, [0039]-[0040] frequency of product view); and determine the new goods display based on the identified number of times for each product (Yamamoto at [0039]-[0040], layout of shelf modified/improved based on frequency of viewing of the product).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the modifiable goods display system of Gruttadauria in view of Sorensen further in view of Sharma to incorporate the line of sight frequency as disclosed by Yamamoto because the references are within the same field of endeavor, namely, modifying product placement in a retail environment based on user data including eye gaze and direction. The motivation to combine these references would have been to improve layout of a product shelf (see Yamamoto at least at [0039]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Claims 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Gruttadauria in view of Sorensen further in view of Sharma as applied to claim 1, further in view of Lu et al., US 2023/0133891 A1 (hereinafter “Lu”).
Regarding claim 10, Gruttadauria in view of Sorensen further in view of Sharma discloses the goods display assistance system according to claim 1 (see above), wherein the at least one processor (Gruttadauria, FIG. 1, CPUs 122) is further configured to execute the instructions (Gruttadauria, [0029] and [0031]).
However, Gruttadauria in view of Sorensen further in view of Sharma does not explicitly disclose score each product in the predetermined goods display based on the detected movement of the line of sight; and output a score of each product.
In the same field of endeavor, Lu discloses a virtual reality based commerce experience (Abstract) configured to score each product in the predetermined goods display based on the detected movement of the line of sight (Lu, FIGS. 2-3 and [0039]-[0046], score of product is dependent on time spent viewing a product by tracking gaze time); and output a score of each product (Lu, FIGS. 2-3 and [0046]-[0048], score of product is dependent on time spent viewing a product by tracking gaze time).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the modifiable goods display system of Gruttadauria in view of Sorensen further in view of Sharma to incorporate scoring component of Lu because the references are within the same field of endeavor, arrangement of goods in a virtual environment based on viewing direction data. The motivation to combine these references would have been to refresh shelves, orient products to increase sales and improve ease of finding products users desire (Lu at [0013]-[0014]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Regarding claim 11, Gruttadauria in view of Sorensen further in view of Sharma further in view of Lu discloses the goods display assistance system according to claim 10 (see above), wherein the at least one processor (Gruttadauria, FIG. 1, CPUs 122) is further configured to execute the instructions (Gruttadauria, [0029] and [0031]) to:
perform scoring in such a way that the longer a time during which the line of sight is directed to a product among the plurality of products in the predetermined goods display based on the movement of the line of sight is, the higher a score is for the product (Lu, FIGS. 2-3 [0023] [0039]-[0048] and [0059]-[0060] describing time spent – eye gaze time spent can be used to determine the first product should be displayed in the first position).
Regarding claim 12, Gruttadauria in view of Sorensen further in view of Sharma further in view of Lu discloses the goods display assistance system according to claim 10 (see above), wherein the at least one processor (Gruttadauria, FIG. 1, CPUs 122) is further configured to execute the instructions (Gruttadauria, [0029] and [0031]) to:
perform scoring in such a way that the larger the number of times the line of sight is directed to a product among the plurality of products in the predetermined goods display based on the movement of the line of sight is, the higher a score is for the product (Lu, FIGS. 2-3 [0039]-[0048] and [0059]-[0060] describing number of times interacted with a product including eye-gaze time, which can be used to determine the first product should be displayed in the first position – and scored as such).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Gruttadauria in view of Sorensen further in view of Sharma further in view of Lu as applied to claim 10, and further in view of Sorensen, US 2019/0318372 A1 (hereinafter “Sorensen ‘372”).
Regarding claim 13, Gruttadauria in view of Sorensen further in view of Sharma further in view of Lu discloses the goods display assistance system according to claim 10 (see above), wherein the at least one processor (Gruttadauria, FIG. 1, CPUs 122) is further configured to execute the instructions (Gruttadauria, [0029] and [0031]) to:
perform scoring based on the line of sight being directed to a product among the plurality of products in the predetermined goods display based on the movement of the line of sight (Lu, FIGS. 2-3 [0039]-[0048] and [0059]-[0060] describing time spent – eye gaze time spent can be used to determine the first product should be displayed in the first position).
However, Gruttadauria in view of Sorensen further in view of Sharma further in view of Lu does not explicitly disclose scoring in such a way that the shorter a time from when the user enters the store to when the line of sight is directed to a product is, the higher a score is for the product.
In the same field of endeavor, Sorensen ‘372 discloses a physical shopping environment (FIG. 1, 1) in which the system is configured for determining shopping behavior including a length of time from when the user enters the store to when the line of sight is directed to a product (Sorensen ‘372, [0013]-[0015] recording entrance and exit times for sample shoppers 16 in FIG. 1, tracking view vectors of shoppers 16 and 20 with respective objects and displays 22 being viewed at [0021]-[0024] along with eye tracking at [0035]-[0037], [0002] total shopping time may be allocated to each viewing vector to determine allocated time and an exposure value of the product; it would be obvious to one of ordinary skill in the art that with calculated entrance/exit times and calculated times of each viewing vector (i.e., line of sight) to each product and total exposure time, the time from the entrance to each product can be determined, in view of the eye-gaze time dependent input of Lu).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the modifiable goods display system of Gruttadauria in view of Sorensen further in view of Sharma further in view of Lu to incorporate the time measurement system of Sorensen ‘372 because the references are within the same field of endeavor, namely, modifying product placement in a retail environment based on user data including viewing direction. The motivation to combine these references would have been to improve layout of a retail store and product placement, thereby improving the shopping experience for shoppers and economic performance of the store (see Sorensen ‘372 at Abstract and [0023]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Claims 14-18 are rejected under 35 U.S.C. 103 as being unpatentable over Gruttadauria in view of Sorensen further in view of Sharma as applied to claim 1, further in view of Lu et al., US 2023/0133891 A1 (hereinafter “Lu”) further in view of Yang et al., US 2023/0245216 A1 (hereinafter “Yang”).
Regarding claim 14, Gruttadauria in view of Sorensen further in view of Sharma discloses the goods display assistance system according to claim 1 (see above), wherein the at least one processor (Gruttadauria, FIG. 1, CPUs 122) is further configured to execute the instructions (Gruttadauria, [0029] and [0031]).
However, Gruttadauria in view of Sorensen further in view of Sharma does not explicitly disclose score the predetermined goods display based on the movement of the line of sight, and output a score of the predetermined goods display.
In the same field of endeavor, Lu discloses scoring based on the movement of the line of sight (Lu, FIGS. 2-3 and [0039]-[0046], score of product is dependent on time spent viewing a product by tracking gaze time), and outputting a score (Lu, FIGS. 2-3 and [0039]-[0046], outputted score is then compared to one or more thresholds to determine placement, orientation, etc.).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the modifiable goods display system of Gruttadauria in view of Sorensen further in view of Sharma to incorporate scoring component of Lu because the references are within the same field of endeavor, arrangement of goods in a virtual environment based on viewing direction data. The motivation to combine these references would have been to refresh shelves, orient products to increase sales and improve ease of finding products users desire (Lu at [0013]-[0014]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
However, Gruttadauria in view of Sorensen further in view of Sharma further in view of Lu does not explicitly disclose the score is to score the predetermined goods display; and outputting the score of the predetermined goods display.
In the same field of endeavor, Yang discloses a display user interface system (FIG. 1 planogram system 100) to score the predetermined goods display (Yang at [0052]-[0055] and [0059] and FIG. 3, at [0124]-[0127] and [0134]-[0138] determination of score and taking into consideration user input at [0077] and [0102] and [0160]); and outputting the score of the predetermined goods display (Yang, FIG. 3, at [0052]-[0054] group of items ranking, [0124]-[0127] and [0134]-[0138] determination of score used in ranking the display).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the modifiable goods display system of Gruttadauria in view of Sorensen further in view of Sharma further in view of Lu to incorporate goods display scoring of Yang because the references are within the same field of endeavor, arrangement of goods in a shelf system. The motivation to combine these references would have been to optimize and customize placement of goods while taking into account physical constraints (see Yang at [0003]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.
Regarding claim 15, Gruttadauria in view of Sorensen further in view of Sharma further in view of Lu further in view of Yang discloses the goods display assistance system according to claim 14 (see above), wherein the at least one processor (Gruttadauria, FIG. 1, CPUs 122) is further configured to execute the instructions (Gruttadauria, [0029] and [0031]) to: score each of a plurality of goods displays (Yang at [0052]-[0055] and [0059] and FIG. 3, at [0124]-[0127] and [0134]-[0138] determination of score and taking into consideration user input at [0077] and [0102] and [0160]), and
output the plurality of goods displays in order of scores of the plurality of goods displays (Yang, [0052]-[0053] and [0059], ranking each of the group of items; FIG. 3, at [0118]-[0119] and [0124]-[0127] and [0134]-[0138] scores are used in ranking the groupings).
Regarding claim 16, Gruttadauria in view of Sorensen further in view of Sharma further in view of Lu further in view of Yang discloses the goods display assistance system according to claim 15 (see above), wherein the at least one processor (Gruttadauria, FIG. 1, CPUs 122) is further configured to execute the instructions (Gruttadauria, [0029] and [0031]) to:
identify a time during which the line of sight is directed to each product in the goods display based on the movement of the line of sight (Gruttadauria, [0059] time spent exploring a targeted portion of an aisle), and
score the goods display (Yang at [0052]-[0055] and [0059] and FIG. 3, at [0124]-[0127] and [0134]-[0138] determination of score and taking into consideration user input at [0077] and [0102] and [0160]) based on the time during which the line of sight is directed (Lu, FIGS. 2-3 and [0039]-[0046], score of product is dependent on time spent viewing a product by tracking gaze time; noting it would be obvious to one of ordinary skill to incorporate eye tracking as disclosed by Lu, as an input in determination of the score of a goods display as disclosed by Yang, for the clear benefit of incorporating available user data to improve efficiency of the shopping experience).
Regarding claim 17, Gruttadauria in view of Sorensen further in view of Sharma further in view of Lu further in view of Yang discloses the goods display assistance system according to claim 16 (see above), wherein the at least one processor (Gruttadauria, FIG. 1, CPUs 122) is further configured to execute the instructions (Gruttadauria, [0029] and [0031]) to:
score the predetermined goods display (Yang at [0052]-[0055] and [0059] and FIG. 3, at [0124]-[0127] and [0134]-[0138] determination of score and taking into consideration user input at [0077] and [0102] and [0160]) in such a way that the longer the time during which the line of sight is directed to a predetermined product, the higher a score is (Lu, FIGS. 2-3, [0039]-[0048] and [0059]-[0060], describing that the eye-gaze time spent can be used to determine that the first product should be displayed in the first position).
Regarding claim 18, Gruttadauria in view of Sorensen further in view of Sharma further in view of Lu further in view of Yang discloses the goods display assistance system according to claim 14 (see above), wherein the at least one processor (Gruttadauria, FIG. 1, CPUs 122) is further configured to execute the instructions (Gruttadauria, [0029] and [0031]) to:
identify the number of times the line of sight is directed to each product in the goods display based on the movement of the line of sight (Lu, FIGS. 2-3, [0039]-[0048] and [0059]-[0060], describing the number of times a product is interacted with, including eye-gaze time, which can be used to determine that the first product should be displayed in the first position and scored accordingly), and
score the predetermined goods display (Yang at [0052]-[0055] and [0059] and FIG. 3, at [0124]-[0127] and [0134]-[0138] determination of score and taking into consideration user input at [0077] and [0102] and [0160], noting each item is scored at [0022] and later aggregated at [0059]) based on the identified number of times (Lu, FIGS. 2-3, [0039]-[0048] and [0059]-[0060], describing the number of times a product is interacted with, including eye-gaze time, which can be used to determine that the first product should be displayed in the first position and scored accordingly; noting it would be obvious to one of ordinary skill to incorporate eye tracking as disclosed by Lu, as an input in determination of the score of a goods display as disclosed by Yang, for the clear benefit of incorporating available user data to improve efficiency of the shopping experience).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Sharma et al., US 8,706,544 B1: Abstract and FIGS. 4-6 collecting customer data based on movement within the store to determine demographic characterizations to customize content;
Hod et al., US 2008/0208715 A1: Abstract, and FIGS. 4-8 generally describing personalization of virtual store organization;
Sorensen US 2018/0225744 A1: Abstract and FIGS. 2-3 with product placement based on visibility metrics.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SARVESH J. NADKARNI whose telephone number is (571)270-7562. The examiner can normally be reached 8AM-5PM M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benjamin C. Lee, can be reached at (571)272-2963. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SARVESH J NADKARNI/Examiner, Art Unit 2629