DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in reply to the communications filed on September 30, 2025. The Applicant’s Amendment and Request for Reconsideration has been received and entered.
Claims 1-25 are currently pending. Claims 19-25 have been withdrawn in response to the restriction requirement. Claims 1-18 have been examined in this application. Claims 1, 17, and 18 have been amended.
Response to Arguments
Applicant’s amendments necessitated the new grounds of rejection.
Regarding the rejection of claims 1-20 under 35 U.S.C. § 101, Applicant’s arguments have been fully considered but they are not persuasive, for the reasons set forth infra.
Applicant’s remaining arguments have been fully considered but are not persuasive. In particular, those arguments are directed to the claims as presently amended and are therefore moot in view of the new grounds of rejection set forth herein.
Additionally, the Examiner respectfully submits that it is the combination of Anadure/Chuah that teaches “guiding, using AR, the user to capture one or more images of the user's space, with the mobile device, at one or more respective camera positions and/or orientations relative to the indicated position of the proxy product model in the user's space.” Specifically, Chuah teaches guiding, using AR, the user to capture one or more images of the user's space, with the mobile device, at one or more respective camera positions and/or orientations relative to the indicated position of the . . . model in the user's space (Chuah: Figs. 14A-14J (“We’ll have you take panoramas from the corners of your room” → “Stand Here” → “Aim Here” → “Take a panorama”); Col. 80, Ln. 15 through Col. 82, Ln. 40 (During such movement of the user device toward an image capture location, position and orientation data of the user device within the local coordinate frame may be tracked, upon receiving user consent, in order to determine whether the user device has moved toward the defined or selected image capture location within the room. . . . FIG. 14E illustrates an example user interface screen related to further beginning the image capture process using panorama paths, upon receiving user consent. For example, the example user interface screen may include an arrow 1410 and a textual cue 1412 related to orienting a field of view of an imaging sensor of the user device at an image capture location associated with the panorama path and defined within the local coordinate frame of the room. As described herein, a sweep starting point, a direction of sweep, and a sweep ending point may be defined and associated with the image capture location. Thus, the arrow 1410 and the textual cue 1412 may guide the user to orient a field of view of the imaging sensor of the user device toward a sweep starting point to capture images at the image capture location, upon receiving user consent. In addition, imaging data captured via an imaging sensor of the user device may be presented via a display of the user device.); Figs. 9A-9L). Anadure further teaches . . . , using AR, the user to capture one or more images of the user's space, with the mobile device, at one or more respective camera positions and/or orientations relative to the indicated position of the proxy product model in the user's space (Anadure: [0056] (The shape size determination module 210, for instance, may first determine dimensions of the physical environment 108. This may be determined in a variety of ways. In one example, the dimensions of the physical environment 108 are determined from the live stream of digital images 112 based on parallax, e.g., in which objects that are closer to the digital camera 110 exhibit greater amounts of movement than objects that are further away from the digital camera 110, e.g., in successive and/or stereoscopic images.)). Therefore, in response to Applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
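For illustration only, the parallax principle quoted above from Anadure [0056] reduces to the standard relation that apparent image shift is inversely proportional to distance. A minimal sketch follows; the function name and all values are hypothetical and are not drawn from Anadure:

```python
# Sketch of the parallax cue quoted from Anadure [0056]: between two camera
# positions, nearby objects shift more in the image than distant ones, so the
# apparent shift (disparity) can be inverted into a depth estimate.
# Hypothetical names and values; not Anadure's implementation.

def depth_from_parallax(disparity_px: float, baseline_m: float, focal_px: float) -> float:
    """Classic relation: depth = focal_length * camera_baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("no measurable parallax (object effectively at infinity)")
    return focal_px * baseline_m / disparity_px

# A nearby countertop edge shifts 40 px between successive frames; a far wall
# shifts only 4 px for the same 10 cm of camera movement.
print(depth_from_parallax(40.0, 0.10, 1500.0))  # ~3.75 m: larger shift, closer object
print(depth_from_parallax(4.0, 0.10, 1500.0))   # ~37.5 m: smaller shift, farther object
```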
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Step 1. When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter.
Step 2A – Prong One. If the claims fall within one of the statutory categories, it must then be determined whether the claims recite an abstract idea, law of nature, or natural phenomenon.
Step 2A – Prong Two. If the claims recite an abstract idea, law of nature, or natural phenomenon, it must then be determined whether the claims recite additional elements that integrate the judicial exception into a practical application. If the claims do not recite additional elements that integrate the judicial exception into a practical application, then the claims are directed to a judicial exception.
Step 2B. If the claims are directed to a judicial exception, it must be evaluated whether the claims recite additional elements that amount to an inventive concept (i.e. “significantly more”) than the recited judicial exception.
In the instant case, claims 1-16 are directed to a process; claim 17 is directed to a machine; and claim 18 is directed to a manufacture.
A claim “recites” an abstract idea if there are identifiable limitations that fall within at least one of the groupings of abstract ideas enumerated in MPEP § 2106.04(a). In the instant case, claim 1, and similarly claims 17 and 18, recite the steps of: receiving information indicating a position in the user's space at which to place a proxy product model serving as a proxy for a product of a first type; generating a visualization of the proxy product model, at the indicated position, in the user's space; receiving information indicating dimensions for the proxy product model; guiding the user to capture one or more images of the user's space at one or more respective positions and/or orientations relative to the indicated position of the proxy product model in the user's space; obtaining, based on the information indicating the dimensions for the proxy product model and the one or more images of the user's space, at least one 2D image of at least one product of the first type in the user's space; and displaying the at least one 2D image of the at least one product -- these claim limitations set forth certain methods of organizing human activity, particularly commercial interactions including advertising, marketing, and sales activities/behaviors.
Further, the limitations of the claims are not indicative of integration into a practical application. Taking the independent claim elements separately, the additional elements of performing the steps using at least one computer hardware processor, by a mobile device having a camera, using augmented reality (AR), with the mobile device, and with at least one non-transitory computer-readable storage medium storing processor-executable instructions that, when executed by the at least one computer hardware processor, cause the at least one computer hardware processor to perform a method -- merely implement the abstract idea in a generic computer environment. Additionally, taking the dependent claim elements separately, the additional elements of performing the steps via a webpage also merely implement the abstract idea in a generic computer environment. Considered in combination, the steps of Applicant’s method add nothing that is not already present when the steps are considered separately.
Thus, claims 1-18 are directed to an abstract idea.
Regarding the independent claims, the technical elements of performing the steps using at least one computer hardware processor, by a mobile device having a camera, using augmented reality (AR), with the mobile device, and with at least one non-transitory computer-readable storage medium storing processor-executable instructions that, when executed by the at least one computer hardware processor, cause the at least one computer hardware processor to perform a method -- merely implement the abstract idea in a generic computer environment. Additionally, regarding the dependent claims, the technical elements of performing the steps via a webpage also merely implement the abstract idea in a generic computer environment.
When considering the elements and combinations of elements, the claims as a whole do not amount to significantly more than the abstract idea itself. This is because the claims do not improve another technology or technical field; do not improve the functioning of a computer itself; do not move beyond generally linking the use of an abstract idea to a particular technological environment; merely amount to instructions to apply the abstract idea on a computer; and amount to nothing more than requiring a generic computer to perform generic computer functions that are well-understood, routine, and conventional activities previously known to the industry.
The analysis above applies to all statutory categories of invention. Accordingly, claims 1-18 are rejected as ineligible for patenting under 35 USC 101 based upon the same rationale.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-18 are rejected under 35 U.S.C. 103 as being unpatentable over Anadure (US PGPub 2020/0273195) in view of Chuah (US Pat. No. 10,643,344).
As per claim 1, Anadure teaches a method for generating two-dimensional (2D) product images in a user's space, the method comprising:
using at least one computer hardware processor to perform:
receiving, by a mobile device having a camera, information indicating a position in the user's space at which to place a proxy product model serving as a proxy for a product of a first type; (Anadure: Fig. 1; [0036]; Figs. 3-6; [0048]-[0061] (To begin in this example, a computing device 102 of a user captures a live stream of digital images 112 of a physical environment 108, in which, the computing device is disposed through use of a digital camera 110 (block 702). FIG. 3, for instance, depicts an example 300 of user interaction with the computing device 102 through use of first, second, and third stages 302, 304, 306. At the first stage 302, the live stream of digital images 112 is output in a user interface 120 by a display device 122 of the computing device 102. . . . The AR manager module 116, for instance, may output an option 308 in the user interface 120 to receive a search input. . . . the text “kitchen appliances” is received as a user input. A shape selection module 202 is then employed by the AR manager module 116 to receive a user input to select a geometric shape 204 (block 704). . . . At the first stage 402, the selected geometric shape 408 is displayed by a display device 122 of the computing device 102 as augmented reality digital content included as part of the live stream of digital images 112 (block 706). . . . This may be performed in a variety of ways. In a first example, the search result 222 includes a ranked listing of items of digital content 124, e.g., as a listing of different appliances based on “how well” these items satisfy the search input 218 and/or the size 212 of the search query 216. In another example 600 as illustrated in FIG. 6, the search result 222 is displayed as augmented reality digital content with the user interface 120 as part of the live stream of digital images 112.))
generating, using augmented reality (AR), a visualization of the proxy product model, at the indicated position, in the user's space; (Anadure: Figs. 3-4; [0048]-[0060] (At the first stage 402, the selected geometric shape 408 is displayed by a display device 122 of the computing device 102 as augmented reality digital content included as part of the live stream of digital images 112 (block 706). As a result, the geometric shape augments the user's view of the live stream of digital images 112 captured of the physical environment 108 to appear “as if it is really there.”))
receiving information indicating dimensions for the proxy product model; (Anadure: Figs. 3-4; [0050]-[0055] (In a three dimensional example, the shape may specify a volume, e.g., the area on the surface as well as a height. . . . A user input is also received via the display device 122 as manipulating the geometric shape as part of the live stream of digital images (block 708). The selected shape 204, for instance, may be communicated by the shape selection module 202 to a shape manipulation module 206. The shape manipulation module 206 is representative of functionality of the AR manager module 116 that accepts user inputs to modify a location and/or size of the geometric shape 308, e.g., height, width, depth or other dimensions of the selected geometric shape 408. . . . In one example, the geometric shape 408 is output at the first stage 402 of FIG. 4 along with instructions to “size and positions shape” 410. A user input is received at the second stage 404 to change one or more dimensions of the shape 408 as well as position of the shape within a view of the live stream of digital images 112 in the user interface 120 as captured from the physical environment 108. The user inputs, for instance, may select edges or points along a border of the shape to resize dimensions of the shape via touchscreen functionality of the display device 122.))
. . . , using AR, the user to capture one or more images of the user's space, with the mobile device, at one or more respective camera positions and/or orientations relative to the indicated position of the proxy product model in the user's space; (Anadure: [0056] (The shape size determination module 210, for instance, may first determine dimensions of the physical environment 108. This may be determined in a variety of ways. In one example, the dimensions of the physical environment 108 are determined from the live stream of digital images 112 based on parallax, e.g., in which objects that are closer to the digital camera 110 exhibit greater amounts of movement than objects that are further away from the digital camera 110, e.g., in successive and/or stereoscopic images.))
obtaining, based on the information indicating the dimensions for the proxy product model and the one or more images of the user's space, at least one 2D image of at least one product of the first type in the user's space; and (Anadure: Figs. 3-6; [0048]-[0061] (Once the dimensions of the physical environment are determined, dimensions of the manipulated shape 208 are then determined with respect to those dimensions. Returning again to FIG. 4, for instance, dimensions of the countertop may be determined, and from this, dimensions of the manipulated shape 208 as positioned on the countertop based on those dimensions. In this way, the size 212 calculated by the shape size determination module 210 specifies a relationship of the manipulated shape 208 to a view of the physical environment 108 via the live stream of digital images 112 as augmented reality digital content. The size 212 is then communicated from the shape size determination module 210 to a search query generation module 214 to generate a search query 216 that includes a search input 218 and the size 212 (block 712). The search input 218, for instance, may include the text used to initiate the sizing technique above, e.g., “kitchen appliances.” The search input 218, along with the size 212, form a search query 216 that is used as a basis to perform a search. . . . The service provider system 104 includes a search manager module 220 that search digital content 124 that relates to kitchen appliances 502 as specified by the search input 218 and that has a size that approximates and/or is less than the size 212 specified by the search query 216, e.g., within a threshold amount such as a defined percentage. This is used to generate a search result 222 that is then communicated back to the computing device 102. . . . In another example 600 as illustrated in FIG. 6, the search result 222 is displayed as augmented reality digital content with the user interface 120 as part of the live stream of digital images 112.); [0006] (object included in a digital image (e.g., a two-dimensional digital image)); [0050]; [0055]; [0068])
displaying the at least one 2D image of the at least one product. (Anadure: Fig. 6; [0060]-[0061] (In another example 600 as illustrated in FIG. 6, the search result 222 is displayed as augmented reality digital content with the user interface 120 as part of the live stream of digital images 112.); [0046] (FIG. 5 depicts a system 500 in an example implementation of performing a search using the search query generated by the system 200 of FIG. 2. FIG. 6 depicts an example implementation 600 of output of a search result of the search performed in FIG. 5 as augmented reality digital content as part of a live feed of digital images.); [0028] (Results of the search are then output by the computing device, e.g., as a ranked listing, as augmented reality digital content as replacing the geometric shape, and so forth. A user, for instance, may “swipe through” the search results viewed as augmented reality digital content as part of the live stream of digital images to navigate sequentially through the search results to locate a particular product of interest, which may then be purchased from the service provider system.); [0006] (object included in a digital image (e.g., a two-dimensional digital image)); [0050]; [0055]; [0068])
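For illustration only, a minimal sketch of the size-constrained search mapped above from Anadure [0058]-[0061] (a text query plus the AR-measured size, with catalog items accepted when their dimensions approximate or fit within that size to a threshold percentage). All names and data are hypothetical:

```python
# Hypothetical sketch of a size-constrained catalog search in the spirit of
# Anadure [0058]-[0061]: items match when the text query fits their category
# and every dimension is within a tolerance of the AR-measured size.
from dataclasses import dataclass

@dataclass
class CatalogItem:
    name: str
    category: str
    width_m: float
    height_m: float
    depth_m: float

def fits(item: CatalogItem, size: tuple[float, float, float], tol: float = 0.10) -> bool:
    """True if each item dimension is at most (1 + tol) times the measured size."""
    dims = (item.width_m, item.height_m, item.depth_m)
    return all(d <= s * (1 + tol) for d, s in zip(dims, size))

def search(catalog: list[CatalogItem], text: str, size: tuple[float, float, float]) -> list[CatalogItem]:
    return [i for i in catalog if text.lower() in i.category.lower() and fits(i, size)]

catalog = [
    CatalogItem("compact microwave", "kitchen appliances", 0.45, 0.28, 0.35),
    CatalogItem("full-size range", "kitchen appliances", 0.76, 0.91, 0.71),
]
# The AR-sized proxy shape is 0.50 x 0.30 x 0.40 m; only the microwave fits.
print(search(catalog, "kitchen appliances", size=(0.50, 0.30, 0.40)))
```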
Anadure does not explicitly disclose the following known technique, which is taught by Chuah:
guiding, using AR, the user to capture one or more images of the user's space, with the mobile device, at one or more respective camera positions and/or orientations relative to the indicated position of the . . . model in the user's space (Chuah: Figs. 14A-14J (“We’ll have you take panoramas from the corners of your room” → “Stand Here” → “Aim Here” → “Take a panorama”); Col. 80, Ln. 15 through Col. 82, Ln. 40 (During such movement of the user device toward an image capture location, position and orientation data of the user device within the local coordinate frame may be tracked, upon receiving user consent, in order to determine whether the user device has moved toward the defined or selected image capture location within the room. . . . FIG. 14E illustrates an example user interface screen related to further beginning the image capture process using panorama paths, upon receiving user consent. For example, the example user interface screen may include an arrow 1410 and a textual cue 1412 related to orienting a field of view of an imaging sensor of the user device at an image capture location associated with the panorama path and defined within the local coordinate frame of the room. As described herein, a sweep starting point, a direction of sweep, and a sweep ending point may be defined and associated with the image capture location. Thus, the arrow 1410 and the textual cue 1412 may guide the user to orient a field of view of the imaging sensor of the user device toward a sweep starting point to capture images at the image capture location, upon receiving user consent. In addition, imaging data captured via an imaging sensor of the user device may be presented via a display of the user device.); Figs. 9A-9L)
This known technique is applicable to the method of Anadure as they both share characteristics and capabilities, namely, they are directed to capturing a user's environment for augmented reality display.
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized that applying the known technique of Chuah would have yielded predictable results and resulted in an improved method. It would have been recognized that applying the technique of Chuah to the teachings of Anadure would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such guiding features into similar methods. Further, applying Chuah's guiding, using AR, of the user to capture one or more images of the user's space, with the mobile device, at one or more respective camera positions and/or orientations relative to the indicated position in the user's space, to the teachings of Anadure would have been recognized by those of ordinary skill in the art as resulting in an improved method that would allow generation of augmented reality environments that simulate actual or real-world environments with reduced time, cost, and/or specialized training and skills (Chuah: Col. 1, Lns. 5-24).
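For illustration only, the guidance Chuah describes (tracking device pose in the room's local coordinate frame and steering the user toward a predefined image capture location with an arrow and textual cue) can be sketched as a simple loop step. All names are hypothetical; Chuah's disclosure is an AR user interface, not this code:

```python
# Hypothetical sketch of one step of Chuah-style AR guidance: compare the
# tracked device position against a predefined image capture location and
# emit a "Stand Here"/"Aim Here" style cue.
import math

def guidance_cue(device_xy: tuple[float, float],
                 target_xy: tuple[float, float],
                 arrive_radius_m: float = 0.3) -> str:
    dx = target_xy[0] - device_xy[0]
    dy = target_xy[1] - device_xy[1]
    dist = math.hypot(dx, dy)
    if dist <= arrive_radius_m:
        # Device has reached the capture location; switch to orientation cues.
        return "Aim Here: orient the field of view toward the sweep starting point"
    bearing_deg = math.degrees(math.atan2(dy, dx))
    return f"Stand Here: walk {dist:.1f} m at bearing {bearing_deg:.0f} deg (arrow direction)"

# Device tracked at the room origin; capture corner defined at (2.0, 1.5).
print(guidance_cue((0.0, 0.0), (2.0, 1.5)))
```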
As per claim 2, Anadure/Chuah teach the method further comprising:
detecting a surface in the user's space, wherein the surface is near the indicated position in the user's space, (Anadure: Figs. 3-4; [0056]-[0057] (The shape size determination module 210, for instance, may first determine dimensions of the physical environment 108. This may be determined in a variety of ways. In one example, the dimensions of the physical environment 108 are determined from the live stream of digital images 112 based on parallax, e.g., in which objects that are closer to the digital camera 110 exhibit greater amounts of movement than objects that are further away from the digital camera 110, e.g., in successive and/or stereoscopic images. . . . Returning again to FIG. 4, for instance, dimensions of the countertop may be determined, and from this, dimensions of the manipulated shape 208 as positioned on the countertop based on those dimensions.))
wherein generating the visualization of the proxy product model comprises generating the visualization of the proxy product model positioned on the surface. (Anadure: Figs. 3-4; [0056]-[0057] (Returning again to FIG. 4, for instance, dimensions of the countertop may be determined, and from this, dimensions of the manipulated shape 208 as positioned on the countertop based on those dimensions. In this way, the size 212 calculated by the shape size determination module 210 specifies a relationship of the manipulated shape 208 to a view of the physical environment 108 via the live stream of digital images 112 as augmented reality digital content.))
As per claim 3, Anadure/Chuah teach wherein the surface is a horizontal surface or an upright surface. (Anadure: Figs. 3-4 (displaying horizontal countertop surface); [0056]-[0057] (countertop))
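For illustration only, a sketch consistent with the surface detection of claims 2-3: given a detected plane's normal (as an AR framework's plane detection might report), the surface can be classified as horizontal (e.g., the countertop of Anadure Figs. 3-4) or upright. The helper and thresholds are hypothetical and not disclosed by either reference:

```python
# Hypothetical classification of a detected plane as horizontal or upright,
# from the angle between its normal and the gravity-aligned "up" axis.
import numpy as np

def classify_plane(normal: np.ndarray, tol_deg: float = 10.0) -> str:
    n = normal / np.linalg.norm(normal)
    up = np.array([0.0, 1.0, 0.0])                      # gravity-aligned axis
    angle = np.degrees(np.arccos(abs(float(n @ up))))   # angle between normal and up
    if angle <= tol_deg:
        return "horizontal"                             # normal ~parallel to up
    if abs(angle - 90.0) <= tol_deg:
        return "upright"                                # normal ~perpendicular to up
    return "oblique"

print(classify_plane(np.array([0.0, 1.0, 0.02])))  # horizontal (countertop-like)
print(classify_plane(np.array([1.0, 0.05, 0.0])))  # upright (wall-like)
```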
As per claim 4, Anadure/Chuah teach:
receiving user input indicating a selection of the first type of product; and (Anadure: Figs. 3-4; [0048]-[0050] (The AR manager module 116, for instance, may output an option 308 in the user interface 120 to receive a search input. The search input may be provided in a variety of ways, such as input directly via text, converted using speech-to-text recognition based on a spoken utterance, and so forth. In the illustrated example of the first stage 302, the text “kitchen appliances” is received as a user input.))
identifying the proxy product model based on the selection of the first type of product. (Anadure: Figs. 3-4; [0048]-[0050] (The AR manager module 116, for instance, may output an option 308 in the user interface 120 to receive a search input. The search input may be provided in a variety of ways, such as input directly via text, converted using speech-to-text recognition based on a spoken utterance, and so forth. In the illustrated example of the first stage 302, the text “kitchen appliances” is received as a user input. A shape selection module 202 is then employed by the AR manager module 116 to receive a user input to select a geometric shape 204 (block 704). Continuing with the example 300 of FIG. 3, an option is output in the user interface by the shape selection module 202 to “select shape to calculate size” 310 in response to receipt of the search input. Selection of this option at the second stage 304 causes output of a plurality of predefined geometric shapes (e.g., two or three dimensions) in the user interface 120 at the third stage 306. From these options, a user input is received to select a particular shape, e.g., via touchscreen functionality by a user's hand 312, a cursor control device, spoken utterance, and so on.))
As per claim 5, Anadure/Chuah teach wherein receiving the information indicating dimensions for the proxy product model comprises receiving information indicating width, height, and/or depth of a bounding box of the proxy product model. (Anadure: Figs. 3-4; [0050]-[0055] (In a three dimensional example, the shape may specify a volume, e.g., the area on the surface as well as a height. . . . A user input is also received via the display device 122 as manipulating the geometric shape as part of the live stream of digital images (block 708). The selected shape 204, for instance, may be communicated by the shape selection module 202 to a shape manipulation module 206. The shape manipulation module 206 is representative of functionality of the AR manager module 116 that accepts user inputs to modify a location and/or size of the geometric shape 308, e.g., height, width, depth or other dimensions of the selected geometric shape 408.))
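For illustration only, the user-resizable bounding box of claim 5 (and Anadure's manipulated geometric shape at [0050]-[0055]) can be modeled as a box whose width, height, and depth the user edits. The type and method names are hypothetical:

```python
# Hypothetical proxy bounding box: width/height/depth stand in for the
# envelope of the product, and user edits produce an updated box.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ProxyBox:
    width_m: float
    height_m: float
    depth_m: float

    def resized(self, **dims: float) -> "ProxyBox":
        """Return a copy with any of width_m/height_m/depth_m overridden."""
        return replace(self, **dims)

box = ProxyBox(width_m=0.5, height_m=0.3, depth_m=0.4)
box = box.resized(height_m=0.35)  # e.g., the user drags the top edge upward
print(box)                        # ProxyBox(width_m=0.5, height_m=0.35, depth_m=0.4)
```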
As per claim 6, Anadure/Chuah teach wherein the guiding comprises:
guiding the user to capture a first image of the user's space by:
guiding the user to be at a first position, and (Chuah: Figs. 14A-14J (“We’ll have you take panoramas from the corners of your room” → “Stand Here”); Col. 79, Ln. 15 through Col. 81, Ln. 8 (For example, the example user interface screen may include the textual cue 1408 related to moving to an image capture location associated with the panorama path and defined within the local coordinate frame of the room. . . . In other embodiments, an image capture location within the room at which to begin the image capture process may be selected based on position and orientation data of the user device within the local coordinate frame of the room, e.g., an image capture location closest to a current position and orientation of the user device may be selected to begin the image capture process. In addition, imaging data captured via an imaging sensor of the user device may be presented via a display of the user device. Further, the textual cue 1408 may be presented via the display of the user device to guide the user to move the user device toward the image capture location that is generated for presentation at a particular location within the local coordinate frame. . . . During such movement of the user device toward an image capture location, position and orientation data of the user device within the local coordinate frame may be tracked, upon receiving user consent, in order to determine whether the user device has moved toward the defined or selected image capture location within the room.); Figs. 9A-9L)
guiding the user, when at the first position, to position the camera at a first height and to orient the camera of the mobile device in a first orientation. (Chuah: Figs. 14A-14J (“We’ll have you take panoramas from the corners of your room” → “Stand Here” → “Aim Here” → “Take a panorama”); Col. 80, Ln. 15 through Col. 84, Ln. 35 (During such movement of the user device toward an image capture location, position and orientation data of the user device within the local coordinate frame may be tracked, upon receiving user consent, in order to determine whether the user device has moved toward the defined or selected image capture location within the room. . . . FIG. 14E illustrates an example user interface screen related to further beginning the image capture process using panorama paths, upon receiving user consent. For example, the example user interface screen may include an arrow 1410 and a textual cue 1412 related to orienting a field of view of an imaging sensor of the user device at an image capture location associated with the panorama path and defined within the local coordinate frame of the room. As described herein, a sweep starting point, a direction of sweep, and a sweep ending point may be defined and associated with the image capture location. Thus, the arrow 1410 and the textual cue 1412 may guide the user to orient a field of view of the imaging sensor of the user device toward a sweep starting point to capture images at the image capture location, upon receiving user consent. In addition, imaging data captured via an imaging sensor of the user device may be presented via a display of the user device.); Figs. 9A-9L)
The motivation for applying the known techniques of Chuah to the teachings of Anadure is the same as that set forth above, in the rejection of Claim 1.
As per claim 7, Anadure/Chuah teach wherein guiding the user to the first position comprises displaying a visual indicator for the position to the user using AR. (Chuah: Figs. 14A-14J (displaying “Stand Here” and arrow visual indicators); Col. 79, Ln. 15 through Col. 81, Ln. 8 (For example, the example user interface screen may include an arrow 1406 and a textual cue 1408 related to moving to an image capture location associated with the panorama path and defined within the local coordinate frame of the room. . . . Further, the arrow 1406 and the textual cue 1408 may be presented via the display of the user device to guide the user to move the user device toward the image capture location that is generated for presentation at a particular location within the local coordinate frame. For example, the arrow 1406 may be generated, presented, and updated to point in a particular direction, e.g., toward the image capture location, based on position and orientation data of the user device within the local coordinate frame, and the particular location or position of the image capture location within the local coordinate frame. Further, the textual cue 1408 may be generated, presented, and updated to follow or remain adjacent to the presented arrow 1406. Moreover, the arrow 1406 and the textual cue 1408 may be presented generally closer to the floor of the room (or other surface having identifiable features) in order to avoid tracking loss by the imaging sensor of the user device. . . . Further, the textual cue 1408 may be presented via the display of the user device to guide the user to move the user device toward the image capture location that is generated for presentation at a particular location within the local coordinate frame.); Figs. 9A-9L)
The motivation for applying the known techniques of Chuah to the teachings of Anadure is the same as that set forth above, in the rejection of Claim 1.
As per claim 8, Anadure/Chuah teach wherein displaying the visual indicator comprises displaying the visual indicator on a surface on which the user is to stand. (Chuah: Fig. 14B (displaying “Stand Here” and arrow visual indicators on floor); Fig. 14D (displaying “Stand Here” visual indicator on chair to stand on); Col. 79, Ln. 15 through Col. 81, Ln. 8 (Further, the arrow 1406 and the textual cue 1408 may be presented via the display of the user device to guide the user to move the user device toward the image capture location that is generated for presentation at a particular location within the local coordinate frame. For example, the arrow 1406 may be generated, presented, and updated to point in a particular direction, e.g., toward the image capture location, based on position and orientation data of the user device within the local coordinate frame, and the particular location or position of the image capture location within the local coordinate frame. Further, the textual cue 1408 may be generated, presented, and updated to follow or remain adjacent to the presented arrow 1406. Moreover, the arrow 1406 and the textual cue 1408 may be presented generally closer to the floor of the room (or other surface having identifiable features) in order to avoid tracking loss by the imaging sensor of the user device.))
The motivation for applying the known techniques of Chuah to the teachings of Anadure is the same as that set forth above, in the rejection of Claim 1.
As per claim 9, Anadure/Chuah teach wherein guiding the user to orient the camera in the first orientation comprises guiding the user, using AR, to orient the camera to have a pitch, roll, and/or yaw angle, each of which is a specified value or any value occurring within a specified range of values. (Chuah: Figs. 14A-14J (“We’ll have you take panoramas from the corners of your room” → “Stand Here” → “Aim Here” → “Take a panorama”); Col. 81, Ln. 8 through Col. 88, Ln. 14 (FIG. 14E illustrates an example user interface screen related to further beginning the image capture process using panorama paths, upon receiving user consent. For example, the example user interface screen may include an arrow 1410 and a textual cue 1412 related to orienting a field of view of an imaging sensor of the user device at an image capture location associated with the panorama path and defined within the local coordinate frame of the room. As described herein, a sweep starting point, a direction of sweep, and a sweep ending point may be defined and associated with the image capture location. Thus, the arrow 1410 and the textual cue 1412 may guide the user to orient a field of view of the imaging sensor of the user device toward a sweep starting point to capture images at the image capture location, upon receiving user consent. In addition, imaging data captured via an imaging sensor of the user device may be presented via a display of the user device. . . . During sweep of the field of view of the imaging sensor, the arrow 1418 may be generated to present a direction of sweep, e.g., left-to-right, as shown in FIG. 14H. In addition, the image capture progress bar or block 1416 may be generated, presented, and updated to indicate progress of image capture during the sweep of the user device at the image capture location, upon receiving user consent. Further, the image capture progress bar or block 1416 and/or the arrow 1418 may be presented with a first size, shape, orientation, thickness, color, transparency, and/or other visual characteristic during successful sweep of the field of view of the imaging sensor and corresponding capture of imaging data, e.g., vertical angle or orientation, sweep rate, sweep movement, and/or other aspects of the sweep movement are within acceptable thresholds or ranges, and the image capture progress bar or block 1416 and/or the arrow 1418 may be presented with a second size, shape, orientation, thickness, color, transparency, and/or other visual characteristic during unsuccessful sweep of the field of view of the imaging sensor and corresponding capture of imaging data, e.g., vertical angle or orientation, sweep rate, sweep movement, and/or other aspects of the sweep movement are outside acceptable thresholds or ranges.); Figs. 9A-9L). The Examiner notes that the panorama path sweep uses the yaw angle.
The motivation for applying the known techniques of Chuah to the teachings of Anadure is the same as that set forth above, in the rejection of Claim 1.
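For illustration only, the claim 9 limitation (each of pitch, roll, and yaw being a specified value or within a specified range) and Chuah's acceptable-threshold sweep behavior suggest a simple range check; the ranges and names below are hypothetical, not taken from Chuah:

```python
# Hypothetical orientation gate: each Euler angle must fall within its
# specified range before capture is accepted; out-of-range angles would flip
# the cue to an "unsuccessful" visual state, as in Chuah's threshold behavior.

LIMITS_DEG = {"pitch": (-5.0, 5.0), "roll": (-3.0, 3.0), "yaw": (80.0, 100.0)}

def orientation_ok(pitch: float, roll: float, yaw: float) -> bool:
    angles = {"pitch": pitch, "roll": roll, "yaw": yaw}
    return all(lo <= angles[name] <= hi for name, (lo, hi) in LIMITS_DEG.items())

cue_color = "green" if orientation_ok(pitch=1.2, roll=-0.5, yaw=91.0) else "red"
print(cue_color)  # green: device is level and aimed within the yaw window
```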
As per claim 10, Anadure/Chuah teach wherein the guiding further comprises:
guiding the user to capture one or more additional images of the user's space by:
guiding the user to one or more additional positions, and (Chuah: Figs. 14A-14P (“We’ll have you take panoramas from the corners of your room” → “Stand Here”); Col. 87, Ln. 25 through Col. 90, Ln. 11 (FIG. 14M illustrates an example user interface screen related to continuing the image capture process at an additional (or second) image capture location of the panorama path, upon receiving user consent. For example, the example user interface screen may include a textual cue 1408 (and possibly an arrow, as described herein) related to moving to an additional image capture location associated with the panorama path and defined within the local coordinate frame of the room. In some embodiments, an additional image capture location within the room at which to continue the image capture process may be defined or determined according to a direction or order of traversal during the panorama path generation process. . . . Further, the textual cue 1408 may be presented via the display of the user device to guide the user to move the user device toward the additional image capture location that is generated for presentation at a particular location within the local coordinate frame. For example, the textual cue 1408 may be generated, presented, and updated at a particular location or position associated with the additional image capture location, based on position and orientation data of the user device within the local coordinate frame, and the particular location or position of the additional image capture location within the local coordinate frame. Moreover, the textual cue 1408 may be presented generally closer to the floor of the room (or other surface having identifiable features) in order to avoid tracking loss by the imaging sensor of the user device.); Figs. 9A-9L)
at each particular position of the one or more additional positions, guiding the user, when at the particular position, to orient the camera of the mobile device in a specified orientation for the particular position. (Chuah: Figs. 14A-14P; Col. 87, Ln. 25 through Col. 90, Ln. 11 (In addition, the example user interface screen may also include a panorama path traversal progress indicator 1426, which may be generated to present indications related to the number of image capture locations of the panorama path, and to present indications related to progress of traversal and image capture at each of the image capture locations of the panorama path. As shown in FIG. 14M, image capture may be completed at one image capture location, and three additional image capture locations of the panorama path may remain to be completed, upon receiving user consent. With respect to the additional (or second) image capture location described with respect to FIG. 14M, the various steps, processes, and operations described herein with respect to FIGS. 14B-14L may be substantially repeated in order to traverse to the additional image capture location, orient a field of view of an imaging sensor of the user device at the additional image capture location, and sweep the field of view of the imaging sensor at the additional image capture location to capture imaging data.))
The motivation for applying the known techniques of Chuah to the teachings of Anadure is the same as that set forth above, in the rejection of Claim 1.
As per claim 11, Anadure/Chuah teach wherein obtaining the at least one image of a product of the first type in the user's space comprises:
sending, from the mobile device via at least one communication network to at least one computer, (1) the information indicating the position in the user's space at which to place a proxy product model, (2) the information indicating the dimensions for the proxy product model, and (3) the one or more images of the user's space; and (Anadure: Figs. 1-2; [0034]-[0043] (The illustrated environment 100 includes a computing device 102 that is communicatively coupled to a service provider system 104 via a network 106.); Figs. 3-6; [0048]-[0061] (To begin in this example, a computing device 102 of a user captures a live stream of digital images 112 of a physical environment 108, in which, the computing device is disposed through use of a digital camera 110 (block 702). . . . In a three dimensional example, the shape may specify a volume, e.g., the area on the surface as well as a height. . . . As a result, the geometric shape augments the user's view of the live stream of digital images 112 captured of the physical environment 108 to appear “as if it is really there.” A user input is also received via the display device 122 as manipulating the geometric shape as part of the live stream of digital images (block 708). The selected shape 204, for instance, may be communicated by the shape selection module 202 to a shape manipulation module 206. The shape manipulation module 206 is representative of functionality of the AR manager module 116 that accepts user inputs to modify a location and/or size of the geometric shape 308, e.g., height, width, depth or other dimensions of the selected geometric shape 408. . . . In one example, the geometric shape 408 is output at the first stage 402 of FIG. 4 along with instructions to “size and positions shape” 410. A user input is received at the second stage 404 to change one or more dimensions of the shape 408 as well as position of the shape within a view of the live stream of digital images 112 in the user interface 120 as captured from the physical environment 108. The user inputs, for instance, may select edges or points along a border of the shape to resize dimensions of the shape via touchscreen functionality of the display device 122. . . . The shape size determination module 210, for instance, may first determine dimensions of the physical environment 108. This may be determined in a variety of ways. In one example, the dimensions of the physical environment 108 are determined from the live stream of digital images 112 based on parallax, e.g., in which objects that are closer to the digital camera 110 exhibit greater amounts of movement than objects that are further away from the digital camera 110, e.g., in successive and/or stereoscopic images.))
receiving, by the mobile device via the at least one communication network from the at least one computer, the at least one image of the at least one product of the first type in the user's space. (Anadure: Figs. 1-2; [0034]-[0043] (The illustrated environment 100 includes a computing device 102 that is communicatively coupled to a service provider system 104 via a network 106.); [0046] (FIG. 5 depicts a system 500 in an example implementation of performing a search using the search query generated by the system 200 of FIG. 2. FIG. 6 depicts an example implementation 600 of output of a search result of the search performed in FIG. 5 as augmented reality digital content as part of a live feed of digital images.); Figs. 5-6; [0058]-[0061] (The search input 218, for instance, may include the text used to initiate the sizing technique above, e.g., “kitchen appliances.” The search input 218, along with the size 212, form a search query 216 that is used as a basis to perform a search. . . . The service provider system 104 includes a search manager module 220 that search digital content 124 that relates to kitchen appliances 502 as specified by the search input 218 and that has a size that approximates and/or is less than the size 212 specified by the search query 216, e.g., within a threshold amount such as a defined percentage. This is used to generate a search result 222 that is then communicated back to the computing device 102. . . . In another example 600 as illustrated in FIG. 6, the search result 222 is displayed as augmented reality digital content with the user interface 120 as part of the live stream of digital images 112.))
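For illustration only, the claim 11 round trip (uploading the proxy position, its dimensions, and the captured images, then receiving product images back) might be schematized as below. The endpoint, payload shape, and field names are hypothetical; Anadure describes the client/server split only at the level of a computing device coupled to a service provider system over a network:

```python
# Hypothetical request/response shapes for the claim 11 round trip.
import json

request_payload = {
    "proxy_position": {"x": 1.2, "y": 0.9, "z": -0.4},  # (1) placement in the room frame
    "proxy_dimensions": {"w": 0.5, "h": 0.3, "d": 0.4}, # (2) user-set box size, meters
    "captured_images": ["img_001.jpg", "img_002.jpg"],  # (3) references to captured views
    "query": "kitchen appliances",
}
body = json.dumps(request_payload)  # e.g., POSTed to a hypothetical search-by-size service

# The response would carry one or more 2D product-in-scene images:
response = {"results": [{"product": "compact microwave", "image": "composite_001.png"}]}
print(body)
print(response["results"][0]["image"])
```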
As per claim 12, Anadure/Chuah teach wherein obtaining the at least one image of the at least one product of the first type in the user's space comprises:
identifying a plurality of candidate product images, the identifying comprising searching a catalog of images of products of the first type to identify images of products whose dimensions are compatible with the dimensions for the proxy product model and which were taken from camera orientations compatible with the camera orientations used to capture the one or more images of the user's space; and (Anadure: [0046] (FIG. 5 depicts a system 500 in an example implementation of performing a search using the search query generated by the system 200 of FIG. 2. FIG. 6 depicts an example implementation 600 of output of a search result of the search performed in FIG. 5 as augmented reality digital content as part of a live feed of digital images.); Figs. 3-6; [0048]-[0061] (Once the dimensions of the physical environment are determined, dimensions of the manipulated shape 208 are then determined with respect to those dimensions. Returning again to FIG. 4, for instance, dimensions of the countertop may be determined, and from this, dimensions of the manipulated shape 208 as positioned on the countertop based on those dimensions. In this way, the size 212 calculated by the shape size determination module 210 specifies a relationship of the manipulated shape 208 to a view of the physical environment 108 via the live stream of digital images 112 as augmented reality digital content. The size 212 is then communicated from the shape size determination module 210 to a search query generation module 214 to generate a search query 216 that includes a search input 218 and the size 212 (block 712). The search input 218, for instance, may include the text used to initiate the sizing technique above, e.g., “kitchen appliances.” The search input 218, along with the size 212, form a search query 216 that is used as a basis to perform a search. . . . The service provider system 104 includes a search manager module 220 that search digital content 124 that relates to kitchen appliances 502 as specified by the search input 218 and that has a size that approximates and/or is less than the size 212 specified by the search query 216, e.g., within a threshold amount such as a defined percentage. This is used to generate a search result 222 that is then communicated back to the computing device 102. . . . In another example 600 as illustrated in FIG. 6, the search result 222 is displayed as augmented reality digital content with the user interface 120 as part of the live stream of digital images 112.); [0027]-[0031] (User inputs, for instance, may be used to position the geometric shape at a particular location within the view of the physical environment, resize one or more dimensions of the geometric shape, orient the shape, and so forth . . . . Continuing with the previous example, a user may select a rectangular geometric shape. The rectangular geometric shape is then positioned and sized by the user with respect to a view of a countertop, on which, the kitchen appliance that is a subject of a search input is to be placed. A size of the geometric shape (e.g., dimensions) is then used as part of a search query along with text of the search input “kitchen appliance” to locate kitchen appliances that fit those dimensions. Results of the search are then output by the computing device, e.g., as a ranked listing, as augmented reality digital content as replacing the geometric shape, and so forth. A user, for instance, may “swipe through” the search results viewed as augmented reality digital content as part of the live stream of digital images to navigate sequentially through the search results to locate a particular product of interest, which may then be purchased from the service provider system.))
generating one or more product images by compositing one or more of the plurality of candidate product images with at least one of the one or more images of the user's space. (Anadure: Fig. 6; [0060]-[0061] (In another example 600 as illustrated in FIG. 6, the search result 222 is displayed as augmented reality digital content with the user interface 120 as part of the live stream of digital images 112.); [0046] (FIG. 5 depicts a system 500 in an example implementation of performing a search using the search query generated by the system 200 of FIG. 2. FIG. 6 depicts an example implementation 600 of output of a search result of the search performed in FIG. 5 as augmented reality digital content as part of a live feed of digital images.); [0028] (Results of the search are then output by the computing device, e.g., as a ranked listing, as augmented reality digital content as replacing the geometric shape, and so forth. A user, for instance, may “swipe through” the search results viewed as augmented reality digital content as part of the live stream of digital images to navigate sequentially through the search results to locate a particular product of interest, which may then be purchased from the service provider system.))
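For illustration only, the compositing step of claim 12 (placing a candidate product image into a captured image of the user's space) can be sketched as a standard alpha blend; neither reference discloses this exact code, and all array contents here are stand-ins:

```python
# Hypothetical alpha compositing: paste an RGBA product cutout into an RGB
# room photo at the proxy location.
import numpy as np

def composite(room_rgb: np.ndarray, product_rgba: np.ndarray, top: int, left: int) -> np.ndarray:
    """Alpha-blend an RGBA product image over an RGB room image at (top, left)."""
    out = room_rgb.astype(np.float32).copy()
    h, w = product_rgba.shape[:2]
    alpha = product_rgba[:, :, 3:4].astype(np.float32) / 255.0
    region = out[top:top + h, left:left + w]
    region[:] = alpha * product_rgba[:, :, :3] + (1.0 - alpha) * region
    return out.astype(np.uint8)

room = np.full((480, 640, 3), 200, dtype=np.uint8)    # stand-in room photo
cutout = np.zeros((100, 80, 4), dtype=np.uint8)       # stand-in product cutout
cutout[..., :3] = (120, 60, 20)                       # opaque brown box...
cutout[..., 3] = 255                                  # ...with full alpha
result = composite(room, cutout, top=300, left=280)
print(result.shape)  # (480, 640, 3): product pasted into the room image
```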