DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a data storing module” in Claim 1; “an input module” in Claim 13; “a module” in Claim 14; “a collecting module,” “processing module,” “input module,” and “module” in Claim 16; and “a module,” “information collector module,” and “processing module” in Claim 17.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 17 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 17 recites the limitation "information collector module" in line 7. There is insufficient antecedent basis for this limitation in the claim.
Claim 17 recites the limitation “said predicting” in line 13-14. There is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-8, 11-12, and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Chojnacka (US 20200342668 A1).
Regarding Claim 1, Chojnacka teaches a system (Paragraph [0020]: “In a system and method, in accordance with implementations described herein”) comprising:
an image sensor configured to generate an image sequence comprising at least one image (Paragraph [0046]: “FIG. 6 is block diagram of computing device 600 that can generate an augmented reality, or mixed reality environment, and that can provide for user interaction with virtual objects presented in a camera view, or scene, of a physical environment, in accordance with implementations described herein”);
one or more processing engines (Paragraph [0046]: “The computing device 600 may also include a processor 690”); and
a data storing module coupled to the one or more processing engines (Paragraph [0052]: “The storage device 2006 is capable of providing mass storage for the computing device 2000”; Paragraph [0050]: “Computing device 2000 includes a processor 2002, memory 2004, a storage device 2006”. Notes: the broadest reasonable interpretation of a data storing module is an entity that can store data, under which a storage device qualifies);
wherein the data storing module includes computer-executable instructions that, when executed by the one or more processing engines, cause the one or more processing engines to perform operations (Paragraph [0052]: “A computer program product can be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 2004, the storage device 2006, or memory on processor 2002”) including:
displaying on the display device the image sequence of a real-world environment (Paragraph [0002]: “Augmented, or mixed reality systems, or virtual reality systems, may allow users to view scenes, for example, scenes corresponding to their physical environment, and to augment the scenes of their physical environment with virtual objects”; Paragraph [0046]: “FIG. 6 is block diagram of computing device 600 that can generate an augmented reality, or mixed reality environment, and that can provide for user interaction with virtual objects presented in a camera view, or scene, of a physical environment, in accordance with implementations described herein”; Figures 5C-5J illustrate example displays of a real-world environment);
generating a virtual masking of one or more designable areas on the image of the real-world environment as displayed (Figures 5C-5J illustrate the process of displaying a suggested item at a particular location, which is determined based on unoccupied space and surrounding occupied space. Notes: In its broadest reasonable interpretation, a virtual masking is modifying an initial image or scene by overlaying it with some entity that obscures or covers certain elements in the initial image. Hence, the generation of suggested items qualifies as a virtual masking),
the virtual masking being generated by at least one of the real-world environment data of the image sequence (Paragraph [0019]: “In general, this document describes example approaches for modeling spatial relations between objects in an ambient, or physical, or real world environment, and for providing automatic suggestion and/or placement of augmented, or mixed reality objects in the real world environment. In an augmented reality (AR) or a mixed reality (MR) system, or a virtual reality (VR) system, in accordance with implementations described herein, the system may analyze what is visible in the real world environment, and provide for placement of three-dimensional (3D) virtual objects, or augmented/mixed reality objects, in a view of the real world environment. For example, in some implementations, the system may analyze a stream of image information, to gain a semantic understanding of 3D pose information and location information related to real objects in the real world environment, as well as object identification information related to the real objects, and other such information associated with real objects in the real world environment. The system may also take into account empty spaces between the identified real objects, to make a determination of empty space(s) in the real world environment that may be available for virtual object placement in the real world environment. With a semantic understanding of the real world environment, suggestions for virtual object(s) to be placed in the real world environment may be pertinent, or germane, or contextual, to the real world environment, thus enhancing the user's experience”),
analyzing and sensing boundary points of the real-world environment data in at least two dimensions (Paragraph [0004]: “In some implementations, detecting the physical objects may include detecting occupied spaces in the physical environment, including directing a plurality of rays from the computing device toward the physical environment as the image frames of the physical environment are captured, detecting a plurality of keypoints at points where the plurality of rays intersect with the physical objects, marking three-dimensional bounds of the physical objects in response to the detection of the plurality of keypoints, each of the plurality of keypoints being defined by a three-dimensional coordinate, and marking locations on a plane respectively corresponding to the physical objects as occupied spaces based on the marked bounds of each of the physical objects”), and
identifying, using at least said boundary points, spatial attributes of the one or more designable areas (Paragraph [0004]: “In some implementations, identifying the at least one unoccupied space may include superimposing a two-dimensional grid on the plane, the two-dimensional grid defining a plurality of tiles, projecting the plurality of keypoints onto the grid, detecting points at which the projected keypoints intersect tiles of the grid, marking the tiles, of the plurality of tiles, at which the keypoints intersect the grid, as occupied spaces, and marking remaining tiles, of the plurality of tiles, as unoccupied spaces” Notes: unoccupied spaces are considered designable areas, since they are candidates for suggested/recommended item placement; an illustrative sketch of this grid-based marking follows the Claim 1 analysis below),
the virtual masking being displayed at least in two-dimensions (Figures 5F-5J illustrate the virtual masking displayed in at least two dimensions),
the virtual masking further configured to present product category based on said spatial attributes (Paragraph [0020]: “In a system and method, in accordance with implementations described herein, as images are streamed through, for example, a camera of an electronic device, the image frames may be fed through an auto-completion algorithm, or model, to gain a semantic understanding of the physical, real world environment, and in particular, 3D pose and location information of real object(s) in the physical, real world environment. In some implementations, real objects in the physical, real world environment, may be identified through correlation with images in a database, including not just correlation of classes of items, but also correlation of spacing between objects. Empty spacing, for example, spacing available for placement of virtual objects, may be taken into account by transmitting beams through the detected real object(s) in a point cloud, to detect whether or not a space is occupied by a real object. In particular, the detection of unoccupied space may be used to determine spacing available for the selection and suggestion/placement of contextual virtual object(s), based on the semantic understanding of the physical real world environment”; Paragraph [0005]: “Identifying at least one suggested object for placement in the unoccupied space in the physical environment may include determining a context of the physical environment based on the identification of the detected physical objects”. Notes: In its broadest reasonable interpretation, a virtual masking is modifying an initial image or scene by overlaying it with some entity that obscures or covers certain elements in the initial image. Hence, the generation of suggested items qualifies as a virtual masking);
transmitting, to a server, a request specifying a product category of items that have dimensions that fit the dimensions of the virtual masking of the one or more designable areas (Paragraph [0066]: “The systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), and the Internet”; Paragraph [0005]: “In some implementations, the computer-implemented method may also include determining a size of each of the unoccupied spaces based on at least one of a size, a contour, or an orientation of an identified physical object occupying one of the marked occupied spaces. Identifying at least one suggested object for placement in the unoccupied space in the physical environment may include determining a context of the physical environment based on the identification of the detected physical objects, and identifying the at least one suggested object based on the determined context of the physical environment and the identified at least one unoccupied space. In some implementations, identifying the at least one suggested object may include identifying the at least one suggested object based on the determined size of the at least one unoccupied space relative to a position and an orientation of the identified physical objects. In some implementations, identifying the at least one suggested object may include identifying the at least one suggested object based on at least one of user preferences, user profile information, user browsing history, or an inventory of available items”. Notes: Chojnacka determines appropriate item categories based on several factors, including the dimensions of the designable area, and expressly states that the described systems and techniques can be implemented with a back-end data server and a front-end component supporting user interaction. Transmitting such a request to a back-end data server would therefore have been obvious to a person having ordinary skill in the art);
displaying, on a user interface in the display device, one or more items matching the one or more designable areas as defined (Paragraph [0024]: “In this example, the AR environment 200 may be an environment in which virtual objects may be selected and/or placed, by the system and/or by the user, allowing the user to view, interact with, and manipulate the virtual objects displayed within the AR environment 200”; Figures 5F-5J demonstrate what is displayed to a user, where items matching the context of a scene and the designable area (e.g., food on the plate) can be interchanged); and
following a selection of an item from the one or more items, displaying on the display device, a modified image sequence that shows the image sequence and one or more 3D renders of the selected items as arranged at the virtual masking (Paragraph [0024]: “In this example, the AR environment 200 may be an environment in which virtual objects may be selected and/or placed, by the system and/or by the user, allowing the user to view, interact with, and manipulate the virtual objects displayed within the AR environment 200”; Paragraph [0022]: “As noted above, in some implementations, the 3D virtual/augmented/mixed reality object(s) may be suggested, or selected, and placed, contextually, or semantically with respect to the real world environment based on identification of real objects in the real world environment, and empty, or unoccupied, or available spaces identified in the real world environment”; Figures 5F-5J demonstrate what is displayed to a user, where items matching the context of a scene and the designable area (e.g., food on the plate) can be interchanged).
Chojnacka does not explicitly teach a computer vision and classification processing engine.
However, a person having ordinary skill in the art would appreciate that the use of computer vision and classification processing engines is obvious given the context of Chojnacka (Paragraph [0004]: “In some implementations, detecting the physical objects in the physical environment may include capturing, by an image sensor of the computing device, image frames of the physical environment, detecting the physical objects in the image frames captured by the image sensor, comparing the detected physical objects to images stored in a database accessible to the computing device, and identifying the physical objects based on the comparison”).
Chojnacka further teaches generating a virtual masking via processing of the identified physical objects (Paragraph [0005]: “Identifying at least one suggested object for placement in the unoccupied space in the physical environment may include determining a context of the physical environment based on the identification of the detected physical objects”; Paragraph [0007]: “identify the detected physical objects, identify at least one unoccupied space in the physical environment, analyze the detected physical objects and the at least one unoccupied space, identify at least one suggested object for placement in the unoccupied space in the physical environment, and place at least one virtual representation corresponding to the at least one suggested object in a mixed reality scene of the physical environment generated by the computing device, at a position in the mixed reality scene corresponding to the at least one unoccupied space identified in the physical environment” Notes: processing is inherent given the use of a processor to execute the instructions).
In Paragraph [0004], Chojnacka describes core fundamentals of computer vision: using cameras or image sensors to detect and identify objects within a frame via comparison to a database of images. Classification models/engines are trained on labeled data of known objects; once trained, the model predicts the class of an observed object from features derived from the image, in effect comparing the observed object against the database of known objects through the model.
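For illustration only, and not as part of the record or of Chojnacka's disclosure, the following minimal Python sketch shows the kind of database-comparison classification described above: features derived from an image of an observed object are compared against labeled reference features, and the nearest match supplies the predicted label. The feature extractor is omitted, and all feature values, labels, and names are hypothetical.

    # Illustrative sketch only -- not taken from Chojnacka or the claims.
    # A minimal nearest-neighbor classifier: an observed object's image
    # features are compared against a database of labeled reference
    # features, and the closest match supplies the label.
    import numpy as np

    # Hypothetical database: feature vectors for previously labeled objects.
    reference_features = np.array([
        [0.9, 0.1, 0.3],   # features of a known "cup"
        [0.2, 0.8, 0.5],   # features of a known "plate"
        [0.4, 0.4, 0.9],   # features of a known "glass"
    ])
    reference_labels = ["cup", "plate", "glass"]

    def classify(observed: np.ndarray) -> str:
        """Return the label of the reference object closest to the observed features."""
        distances = np.linalg.norm(reference_features - observed, axis=1)
        return reference_labels[int(np.argmin(distances))]

    # Example: features derived from an image of an unknown object.
    print(classify(np.array([0.85, 0.15, 0.25])))  # -> "cup"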
In Paragraph [0007], Chojnacka describes using the identified physical objects for the purpose of understanding the context of the scene, which is then used to suggest an appropriate object to place in the scene. As noted previously, in its broadest reasonable interpretation, a virtual masking is modifying an initial image or scene by overlaying it with some entity that obscures or covers certain elements in the initial image. Hence, the generation of suggested items qualifies as a virtual masking.
Therefore, while Chojnacka does not explicitly mention computer vision and classification, the use of computer vision and classification models/engines would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, given Chojnacka's teachings of identifying objects observed in an image and using the context of the identified objects to suggest an object to generate in the scene.
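For illustration only, the following minimal Python sketch gives one possible reading of the grid-based occupancy marking quoted in the Claim 1 analysis above from Chojnacka's Paragraph [0004]: keypoints (three-dimensional coordinates detected where rays intersect physical objects) are projected onto a two-dimensional grid superimposed on a plane; tiles intersected by a projected keypoint are marked occupied, and the remaining tiles are unoccupied candidates for virtual object placement (i.e., designable areas). The tile size, grid dimensions, and keypoint values are assumptions made for the sketch, not details taken from the reference.

    # Illustrative sketch only -- one simplified reading of the grid-marking
    # approach quoted from Chojnacka's Paragraph [0004].
    import numpy as np

    TILE = 0.5                       # tile edge length in meters (assumed)
    GRID_W, GRID_H = 8, 8            # grid dimensions in tiles (assumed)

    # Hypothetical keypoints: (x, y, z) where z is height above the floor plane.
    keypoints = np.array([
        [0.6, 0.7, 0.9],             # e.g., a point detected on a table
        [2.3, 1.1, 0.4],             # e.g., a point detected on a chair
    ])

    occupied = np.zeros((GRID_H, GRID_W), dtype=bool)
    for x, y, _z in keypoints:
        col, row = int(x // TILE), int(y // TILE)    # project onto the plane's grid
        if 0 <= row < GRID_H and 0 <= col < GRID_W:
            occupied[row, col] = True                # tile intersected by a keypoint

    # Every unmarked tile is an unoccupied space available for virtual placement.
    unoccupied_tiles = np.argwhere(~occupied)
    print(f"{len(unoccupied_tiles)} unoccupied tiles of {occupied.size}")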
Regarding Claim 2, the system in accordance with Claim 1 is rejected over Chojnacka.
Chojnacka teaches a system wherein the image sensor is configured to collect data including dimensions of the real-world environment (Paragraph [0004]: “In some implementations, detecting the physical objects in the physical environment may include capturing, by an image sensor of the computing device, image frames of the physical environment, detecting the physical objects in the image frames captured by the image sensor, comparing the detected physical objects to images stored in a database accessible to the computing device, and identifying the physical objects based on the comparison. In some implementations, detecting the physical objects may include detecting occupied spaces in the physical environment, including directing a plurality of rays from the computing device toward the physical environment as the image frames of the physical environment are captured, detecting a plurality of keypoints at points where the plurality of rays intersect with the physical objects, marking three-dimensional bounds of the physical objects in response to the detection of the plurality of keypoints, each of the plurality of keypoints being defined by a three-dimensional coordinate, and marking locations on a plane respectively corresponding to the physical objects as occupied spaces based on the marked bounds of each of the physical objects”).
Regarding Claim 3, the system in accordance with Claim 1 is rejected over Chojnacka.
Chojnacka teaches a system wherein the image sensor includes a camera (Paragraph [0008]: “In some implementations, the instructions may cause the computing device to capture, by an image sensor of the computing device, image frames of the physical environment”. Notes: a camera is inherent to the capture of images).
Regarding Claim 4, the system in accordance with Claim 1 is rejected over Chojnacka.
Chojnacka teaches a system wherein the display device includes a computing device (Paragraph [0047]: “The augmented/mixed reality scene including the view of the physical environment may be, for example, a camera view of the physical environment captured by an imaging device of the computing device, and displayed on a display device of the computing device” Notes: the display device is integrated with the computing device).
Regarding Claim 5, the system in accordance with Claim 1 is rejected over Chojnacka.
Chojnacka teaches a system wherein data collected and generated by the image sensor includes image data of the real-world environment (Paragraph [0025]: “As shown in FIG. 3A, a user in the physical environment 100A may use a sensor, for example, a camera of the electronic device 110 to stream images of the physical environment 100A. As the images of the physical environment 100A are streamed, and the image frames are fed into the recognition algorithm or model, 3D pose and location information related to real objects in the physical environment 100A may be detected”),
wherein physical dimensions are derived from the image data in at least one image (Paragraph [0004]: “In some implementations, detecting the physical objects in the physical environment may include capturing, by an image sensor of the computing device, image frames of the physical environment, detecting the physical objects in the image frames captured by the image sensor, comparing the detected physical objects to images stored in a database accessible to the computing device, and identifying the physical objects based on the comparison. In some implementations, detecting the physical objects may include detecting occupied spaces in the physical environment, including directing a plurality of rays from the computing device toward the physical environment as the image frames of the physical environment are captured, detecting a plurality of keypoints at points where the plurality of rays intersect with the physical objects, marking three-dimensional bounds of the physical objects in response to the detection of the plurality of keypoints, each of the plurality of keypoints being defined by a three-dimensional coordinate, and marking locations on a plane respectively corresponding to the physical objects as occupied spaces based on the marked bounds of each of the physical objects”).
Regarding Claim 6, the system in accordance with Claim 5 is rejected over Chojnacka.
Chojnacka teaches a system wherein the computer-executable instructions, when executed by the one or more processing engines, cause the one or more processing engines to determine dimensions of the real-world environment based on the 3D feature data (Paragraph [0004]: “In some implementations, detecting the physical objects in the physical environment may include capturing, by an image sensor of the computing device, image frames of the physical environment, detecting the physical objects in the image frames captured by the image sensor, comparing the detected physical objects to images stored in a database accessible to the computing device, and identifying the physical objects based on the comparison. In some implementations, detecting the physical objects may include detecting occupied spaces in the physical environment, including directing a plurality of rays from the computing device toward the physical environment as the image frames of the physical environment are captured, detecting a plurality of keypoints at points where the plurality of rays intersect with the physical objects, marking three-dimensional bounds of the physical objects in response to the detection of the plurality of keypoints, each of the plurality of keypoints being defined by a three-dimensional coordinate, and marking locations on a plane respectively corresponding to the physical objects as occupied spaces based on the marked bounds of each of the physical objects”; Paragraph [0007]: “In another general aspect, a system may include at least one computing device, including a memory storing executable instructions, and a processor configured to execute the instructions. Execution of the instructions may cause the at least one computing device to detect, by a sensor of the at least one computing device, physical objects in a physical environment, identify the detected physical objects, identify at least one unoccupied space in the physical environment, analyze the detected physical objects and the at least one unoccupied space, identify at least one suggested object for placement in the unoccupied space in the physical environment, and place at least one virtual representation corresponding to the at least one suggested object in a mixed reality scene of the physical environment generated by the computing device, at a position in the mixed reality scene corresponding to the at least one unoccupied space identified in the physical environment”).
Regarding Claim 7, the system in accordance with Claim 1 is rejected over Chojnacka.
Chojnacka teaches a system wherein the one or more processing engines include a recommendation engine configured to provide a product recommendation based at least on the candidate product category (Paragraph [0021]: “Using this mapped model of the 3D physical, real world space and real world objects, including a semantic understanding and 3D pose and location information of real objects, and available, unoccupied space, the system may recommend items, in particular, germane, pertinent, contextual items, for virtual placement”; Figure 5H-5J illustrate how various products belonging to a category can be generated/suggested. Notes: an engine, in its broadest reasonable interpretation, is a model or any other entity that provides a recommendation based on data).
Regarding Claim 8, the system in accordance with Claim 1 is rejected over Chojnacka.
Chojnacka teaches a system wherein the one or more processing engines include a recommendation engine configured to provide a product recommendation based at least on a profile of a user (Paragraph [0021]: “In some implementations, the suggested items for placement may be ranked by a recommendation model, that may be trained through pair-wise scoring between pairs of objects. In some implementations, this can include known/observed user brand interest, location, and other such factors”).
Regarding Claim 11, the system in accordance with Claim 1 is rejected over Chojnacka.
Chojnacka teaches a system wherein the virtual masking of the one or more designable areas is a generic representation of the at least one product from a product category presented based on the spatial attributes (Paragraph [0005]: “In some implementations, the computer-implemented method may also include determining a size of each of the unoccupied spaces based on at least one of a size, a contour, or an orientation of an identified physical object occupying one of the marked occupied spaces. Identifying at least one suggested object for placement in the unoccupied space in the physical environment may include determining a context of the physical environment based on the identification of the detected physical objects, and identifying the at least one suggested object based on the determined context of the physical environment and the identified at least one unoccupied space. In some implementations, identifying the at least one suggested object may include identifying the at least one suggested object based on the determined size of the at least one unoccupied space relative to a position and an orientation of the identified physical objects. In some implementations, identifying the at least one suggested object may include identifying the at least one suggested object based on at least one of user preferences, user profile information, user browsing history, or an inventory of available items”; Figures 5F-5J illustrate generic representations of recommended products, from product categories semantically related to the scene, presented in unoccupied spaces deemed designable areas).
Regarding Claim 12, the system in accordance with Claim 1 is rejected over Chojnacka.
Chojnacka teaches a system wherein the computer-executable instructions, when executed by the one or more processing engines, cause the one or more processing engines to allow the one or more designable areas to be manipulated by a user (Paragraph [0050]: “Computing device 2000 includes a processor 2002, memory 2004, a storage device 2006”; Paragraph [0052]: “A computer program product can be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 2004, the storage device 2006, or memory on processor 2002”; Paragraph [0024]: “In this example, the AR environment 200 may be an environment in which virtual objects may be selected and/or placed, by the system and/or by the user, allowing the user to view, interact with, and manipulate the virtual objects displayed within the AR environment 200”).
Claim 15, being similar in scope to Claim 1, is rejected under the same rationale.
Regarding Claim 16, Chojnacka teaches a computer implemented interior decoration system (Paragraph [0052]: “A computer program product can be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 2004, the storage device 2006, or memory on processor 2002”; Paragraph [0003]: “In one general aspect, a computer-implemented method may include detecting, by a sensor of a computing device, physical objects in a physical environment, identifying the detected physical objects, identifying at least one unoccupied space in the physical environment, analyzing the detected physical objects and the at least one unoccupied space, identifying at least one suggested object for placement in the unoccupied space in the physical environment, and placing at least one virtual representation corresponding to the at least one suggested object in a mixed reality scene of the physical environment generated by the computing device, at a position in the mixed reality scene corresponding to the at least one unoccupied space identified in the physical environment”; Figures 5C-5J illustrate the processes performed)
operable by a user (Paragraph [0024]: “In this example, the AR environment 200 may be an environment in which virtual objects may be selected and/or placed, by the system and/or by the user, allowing the user to view, interact with, and manipulate the virtual objects displayed within the AR environment 200”),
the computer implemented interior decoration system comprising:
A collecting module for identifying an environment data in a reality viewpoint of the user (Paragraph [0002]: “Augmented, or mixed reality systems, or virtual reality systems, may allow users to view scenes, for example, scenes corresponding to their physical environment, and to augment the scenes of their physical environment with virtual objects”; Paragraph [0003]: “In one general aspect, a computer-implemented method may include detecting, by a sensor of a computing device, physical objects in a physical environment, identifying the detected physical objects, identifying at least one unoccupied space in the physical environment, analyzing the detected physical objects and the at least one unoccupied space, identifying at least one suggested object for placement in the unoccupied space in the physical environment, and placing at least one virtual representation corresponding to the at least one suggested object in a mixed reality scene of the physical environment generated by the computing device, at a position in the mixed reality scene corresponding to the at least one unoccupied space identified in the physical environment” Notes: a collecting module, in its broadest reasonable interpretation, is any entity that accumulates data, which is inherent in the use of a camera/sensor),
Comprising at least one environment information collector (Paragraph [0003]: “In one general aspect, a computer-implemented method may include detecting, by a sensor of a computing device, physical objects in a physical environment”; Paragraph [0041]: “As described above in detail, images of the physical environment 100B may be streamed, and the image frames may be fed into the recognition algorithm or model, to obtain 3D pose and location information, and identification/recognition of real objects in the physical environment 100B to provide for semantic understanding of the physical environment 100B”);
At least one processing module configured to receive the data from the collecting module and identify an area candidate for design (Paragraph [0007]: “In another general aspect, a system may include at least one computing device, including a memory storing executable instructions, and a processor configured to execute the instructions. Execution of the instructions may cause the at least one computing device to detect, by a sensor of the at least one computing device, physical objects in a physical environment, identify the detected physical objects, identify at least one unoccupied space in the physical environment, analyze the detected physical objects and the at least one unoccupied space, identify at least one suggested object for placement in the unoccupied space in the physical environment, and place at least one virtual representation corresponding to the at least one suggested object in a mixed reality scene of the physical environment generated by the computing device, at a position in the mixed reality scene corresponding to the at least one unoccupied space identified in the physical environment”);
An input module configured to receive optional input from the user to manipulate the area candidate for design (Paragraph [0024]: “In this example, the AR environment 200 may be an environment in which virtual objects may be selected and/or placed, by the system and/or by the user, allowing the user to view, interact with, and manipulate the virtual objects displayed within the AR environment 200”; Paragraph [0046]: “the computing device 600 may include a user interface system 620 including at least one output device and at least one input device… The at least one input device may include, for example, a touch input device that can receive tactile user inputs, a microphone that can receive audible user inputs, and the like”; Figures 5F-5J illustrate what is displayed to a user);
A module configured for augmenting the reality viewpoint with a virtual masking of said identified candidate area for design, presenting the augmented view to the user and configured to provide at least one suggestion on one or more candidate categories suitable for the identified candidate area for design (Paragraph [0024]: “In this example, the AR environment 200 may be an environment in which virtual objects may be selected and/or placed, by the system and/or by the user, allowing the user to view, interact with, and manipulate the virtual objects displayed within the AR environment 200”; Figures 5F-5J illustrate what is displayed to the user. Notes: Given that the task is performed in the system of Chojnacka, a module configured for performing the task is inherent. The broadest reasonable interpretation of a virtual masking is modifying an initial image or scene by overlaying it with some entity that obscures or covers certain elements in the initial image. Hence, the generation of suggested items qualifies as a virtual masking);
Wherein the module for augmenting said reality viewpoint being further configured to present to said user an image of at least one virtual product from the one or more candidate product categories, thereby providing an augmented reality viewpoint of said reality viewpoint comprising the at least one virtual product in said identified candidate area for design (Paragraph [0005]: “In some implementations, the computer-implemented method may also include determining a size of each of the unoccupied spaces based on at least one of a size, a contour, or an orientation of an identified physical object occupying one of the marked occupied spaces. Identifying at least one suggested object for placement in the unoccupied space in the physical environment may include determining a context of the physical environment based on the identification of the detected physical objects, and identifying the at least one suggested object based on the determined context of the physical environment and the identified at least one unoccupied space. In some implementations, identifying the at least one suggested object may include identifying the at least one suggested object based on the determined size of the at least one unoccupied space relative to a position and an orientation of the identified physical objects. In some implementations, identifying the at least one suggested object may include identifying the at least one suggested object based on at least one of user preferences, user profile information, user browsing history, or an inventory of available items”; Paragraph [0029]: “This may allow the user to view the suggested items in the physical environment 100A, for consideration with regard to placement, purchase and the like”. Notes: the displayed objects are considered products because they are considered with regards to purchase).
Regarding Claim 17, Chojnacka teaches a computer implemented interior decoration method for providing an augmented reality viewpoint to a user (Paragraph [0052]: “A computer program product can be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 2004, the storage device 2006, or memory on processor 2002”; Paragraph [0003]: “In one general aspect, a computer-implemented method may include detecting, by a sensor of a computing device, physical objects in a physical environment, identifying the detected physical objects, identifying at least one unoccupied space in the physical environment, analyzing the detected physical objects and the at least one unoccupied space, identifying at least one suggested object for placement in the unoccupied space in the physical environment, and placing at least one virtual representation corresponding to the at least one suggested object in a mixed reality scene of the physical environment generated by the computing device, at a position in the mixed reality scene corresponding to the at least one unoccupied space identified in the physical environment”; Figures 5C-5J illustrate the processes performed),
the computer implemented interior decoration method comprising:
identifying and collecting, by at least one module, a real-world environment data in a reality viewpoint of the user (Paragraph [0002]: “Augmented, or mixed reality systems, or virtual reality systems, may allow users to view scenes, for example, scenes corresponding to their physical environment, and to augment the scenes of their physical environment with virtual objects”; Paragraph [0003]: “In one general aspect, a computer-implemented method may include detecting, by a sensor of a computing device, physical objects in a physical environment, identifying the detected physical objects, identifying at least one unoccupied space in the physical environment, analyzing the detected physical objects and the at least one unoccupied space, identifying at least one suggested object for placement in the unoccupied space in the physical environment, and placing at least one virtual representation corresponding to the at least one suggested object in a mixed reality scene of the physical environment generated by the computing device, at a position in the mixed reality scene corresponding to the at least one unoccupied space identified in the physical environment” Notes: a collecting module, in its broadest reasonable interpretation, is any entity that accumulates data, which is inherent in the use of a camera/sensor),
using at least one environmental information collector for detecting and collecting two and/or three-dimensional information of the environment (Paragraph [0003]: “In one general aspect, a computer-implemented method may include detecting, by a sensor of a computing device, physical objects in a physical environment, identifying the detected physical objects, identifying at least one unoccupied space in the physical environment, analyzing the detected physical objects and the at least one unoccupied space, identifying at least one suggested object for placement in the unoccupied space in the physical environment, and placing at least one virtual representation corresponding to the at least one suggested object in a mixed reality scene of the physical environment generated by the computing device, at a position in the mixed reality scene corresponding to the at least one unoccupied space identified in the physical environment”; Paragraph [0004]: “In some implementations, detecting the physical objects may include detecting occupied spaces in the physical environment, including directing a plurality of rays from the computing device toward the physical environment as the image frames of the physical environment are captured, detecting a plurality of keypoints at points where the plurality of rays intersect with the physical objects, marking three-dimensional bounds of the physical objects in response to the detection of the plurality of keypoints, each of the plurality of keypoints being defined by a three-dimensional coordinate, and marking locations on a plane respectively corresponding to the physical objects as occupied spaces based on the marked bounds of each of the physical objects” Notes: a collecting module, in its broadest reasonable interpretation, is any entity that accumulates data, which is inherent in the use of a camera/sensor);
processing the data from the at least one information collector module to identify an area candidate for decoration (Paragraph [0004]: “In some implementations, detecting the physical objects in the physical environment may include capturing, by an image sensor of the computing device, image frames of the physical environment, detecting the physical objects in the image frames captured by the image sensor, comparing the detected physical objects to images stored in a database accessible to the computing device, and identifying the physical objects based on the comparison. In some implementations, detecting the physical objects may include detecting occupied spaces in the physical environment, including directing a plurality of rays from the computing device toward the physical environment as the image frames of the physical environment are captured, detecting a plurality of keypoints at points where the plurality of rays intersect with the physical objects, marking three-dimensional bounds of the physical objects in response to the detection of the plurality of keypoints, each of the plurality of keypoints being defined by a three-dimensional coordinate, and marking locations on a plane respectively corresponding to the physical objects as occupied spaces based on the marked bounds of each of the physical objects”; Paragraph [0005]: “In some implementations, the computer-implemented method may also include determining a size of each of the unoccupied spaces based on at least one of a size, a contour, or an orientation of an identified physical object occupying one of the marked occupied spaces. Identifying at least one suggested object for placement in the unoccupied space in the physical environment may include determining a context of the physical environment based on the identification of the detected physical objects, and identifying the at least one suggested object based on the determined context of the physical environment and the identified at least one unoccupied space. In some implementations, identifying the at least one suggested object may include identifying the at least one suggested object based on the determined size of the at least one unoccupied space relative to a position and an orientation of the identified physical objects. In some implementations, identifying the at least one suggested object may include identifying the at least one suggested object based on at least one of user preferences, user profile information, user browsing history, or an inventory of available items”); and
presenting the augmented reality viewpoint with the identified candidate area to the user (Paragraph [0024]: “In this example, the AR environment 200 may be an environment in which virtual objects may be selected and/or placed, by the system and/or by the user, allowing the user to view, interact with, and manipulate the virtual objects displayed within the AR environment 200”; Figures 5F-5J illustrate what is displayed to the user)
and providing, using at least one processing module, at least one suggestion on one or more candidate product categories suitable for said identified candidate area for decoration (Paragraph [0007]: “In another general aspect, a system may include at least one computing device, including a memory storing executable instructions, and a processor configured to execute the instructions. Execution of the instructions may cause the at least one computing device to detect, by a sensor of the at least one computing device, physical objects in a physical environment, identify the detected physical objects, identify at least one unoccupied space in the physical environment, analyze the detected physical objects and the at least one unoccupied space, identify at least one suggested object for placement in the unoccupied space in the physical environment, and place at least one virtual representation corresponding to the at least one suggested object in a mixed reality scene of the physical environment generated by the computing device, at a position in the mixed reality scene corresponding to the at least one unoccupied space identified in the physical environment”);
said predicting based on at least one of the suggested product categories being complementary to the identified reality viewpoint (Paragraph [0005]: “identifying the at least one suggested object based on the determined context of the physical environment and the identified at least one unoccupied space”) and
at least a location, dimensions, and characteristics of said identified candidate area (Paragraph [0005]: “identifying the at least one suggested object based on the determined context of the physical environment and the identified at least one unoccupied space. In some implementations, identifying the at least one suggested object may include identifying the at least one suggested object based on the determined size of the at least one unoccupied space relative to a position and an orientation of the identified physical objects”; Paragraph [0004]: “In some implementations, detecting the physical objects in the physical environment may include capturing, by an image sensor of the computing device, image frames of the physical environment, detecting the physical objects in the image frames captured by the image sensor, comparing the detected physical objects to images stored in a database accessible to the computing device, and identifying the physical objects based on the comparison. In some implementations, detecting the physical objects may include detecting occupied spaces in the physical environment, including directing a plurality of rays from the computing device toward the physical environment as the image frames of the physical environment are captured, detecting a plurality of keypoints at points where the plurality of rays intersect with the physical objects, marking three-dimensional bounds of the physical objects in response to the detection of the plurality of keypoints, each of the plurality of keypoints being defined by a three-dimensional coordinate, and marking locations on a plane respectively corresponding to the physical objects as occupied spaces based on the marked bounds of each of the physical objects”).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Chojnacka (US 20200342668 A1) in view of Singh (WO 2019116394 A1).
Regarding Claim 9, the system in accordance with Claim 1 is rejected over Chojnacka.
Chojnacka teaches a system wherein the one or more processing engines include a recommendation engine (Paragraph [0021]: “In some implementations, the suggested items for placement may be ranked by a recommendation model”. Notes: an engine, in its broadest reasonable interpretation, is a model or any other entity that provides a recommendation based on data) based on multiple factors.
Chojnacka does not teach a system wherein the one or more processing engines include a recommendation engine based at least on a past purchase history of a system user.
However, Singh teaches a recommendation engine configured to select and provide a product recommendation based at least on a past purchase history of a system user (Paragraph [0028]: “In one embodiment, the collaboration module 204 may further implement one or more algorithms for recommending one or more virtual objects to be placed in the spatial representation based on at least a spatial computation, context matching, a budget, a purchase history, a browsing history of the user and the like.”).
Chojnacka and Singh are considered analogous in the art with respect to the use of a recommendation engine for the placement of virtual objects in a representation of a space. A common motivation in the art would be to tailor recommendations to users, such that as many relevant factors related to the user are satisfied, as seen in both Chojnacka and Singh.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the recommendation engine of Chojnacka with the recommendation engine of Singh; doing so would yield the predictable result of a recommendation engine based on a past purchase history of a user, which would provide recommendations more likely to be accepted by the user.
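For illustration only, and not as the implementation of either Chojnacka or Singh, the following Python sketch shows a recommendation step of the combined kind discussed above: candidate products are first filtered so that their dimensions fit the dimensions of the designable area (as in the Claim 1 analysis), and the remaining candidates are then ranked with a boost for product categories appearing in the user's past purchase history (as in Singh). All product data, scores, weights, and names are hypothetical.

    # Illustrative sketch only -- a toy recommendation step combining
    # dimension fit with a boost for the user's past purchase history.
    from dataclasses import dataclass

    @dataclass
    class Product:
        name: str
        category: str
        width: float       # meters
        depth: float       # meters
        base_score: float  # e.g., contextual relevance from the scene model

    def recommend(products, area_w, area_d, purchase_history, history_boost=0.5):
        """Rank products that fit within the (area_w x area_d) designable area."""
        fitting = [p for p in products if p.width <= area_w and p.depth <= area_d]
        def score(p):
            return p.base_score + (history_boost if p.category in purchase_history else 0.0)
        return sorted(fitting, key=score, reverse=True)

    catalog = [
        Product("floor lamp", "lighting", 0.4, 0.4, 0.7),
        Product("armchair", "seating", 0.9, 0.9, 0.8),   # too large; filtered out
        Product("side table", "tables", 0.5, 0.5, 0.6),
    ]
    history = {"lighting"}  # categories the user purchased before
    for p in recommend(catalog, area_w=0.6, area_d=0.6, purchase_history=history):
        print(p.name)  # "floor lamp" ranks above "side table"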
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Chojnacka as modified in view of Sivan (US 20210082024 A1).
Regarding Claim 10, Chojnacka as modified teaches a system wherein the one or more processing engines include a recommendation engine (Chojnacka, Paragraph [0021]: “In some implementations, the suggested items for placement may be ranked by a recommendation model”. Notes: an engine, in its broadest reasonable interpretation, is a model or any other entity that provides a recommendation based on data) based on multiple factors.
Chojnacka as modified does not teach a system wherein the one or more processing engines include a recommendation engine configured to select and provide a product recommendation based at least on a merchant product availability.
However, Sivan teaches a recommendation engine configured to select and provide a product recommendation based at least on a merchant product availability (Paragraph [0064]: “Product recommendation engine 336 can determine one or more product recommendations to be provided on the customer facing device 318 based on any of a variety of factors, such as the products identified by the product identification system 332, the products that are determined to be available as determined by the product availability system 334, and/or purchasing models (e.g., user models 340, other purchasing models 342)”).
Chojnacka as modified and Sivan are considered analogous art with respect to recommendation engines for products/objects. A common motivation in the art would be to tailor recommendations to users so that as many relevant factors related to the user as possible are satisfied, as seen in both Chojnacka as modified and Sivan.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the recommendation engine of Chojnacka as modified with the recommendation engine of Sivan; doing so would yield the predictable result of a recommendation engine based on merchant product availability, which would provide recommendations more likely to be accepted by the user.
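For illustration only, a minimal sketch of gating candidate recommendations on merchant product availability before ranking, at the level of generality of Sivan's Paragraph [0064], follows; the SKU-keyed availability mapping is a hypothetical simplification and not Sivan's actual system.

```python
# Illustrative only: filter candidates by merchant availability before
# ranking. The availability mapping (SKU -> units in stock) is a
# hypothetical simplification.
def filter_by_availability(items, availability):
    """Keep only items the merchant can currently supply."""
    return [it for it in items if availability.get(it["sku"], 0) > 0]

# e.g., recommend(filter_by_availability(items, stock), user) using the
# ranking sketch above.
```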
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Chojnacka as modified, in view of Govindarajan (US 20180101536 A1).
Regarding Claim 13, the system in accordance with Claim 12 is rejected over Chojnacka as modified as set forth above.
Chojnacka as modified teaches a system wherein the manipulation of the one or more designable areas comprises at least one of moving, changing the designable area in some manner, and/or changing the product (Chojnacka, Paragraph [0045]: “In some implementations, the user may interact with the virtual objects representing the cup of coffee 550V, the beverage glass 555V, one of the entrees 560V, 565V, 570V and the like to, for example, select the corresponding item. In interacting with a particular virtual object to select the corresponding suggested item, the user may, for example, access additional information related to the item, order the item, move or in some manner change the item, and the like”; Chojnacka, Paragraph [0021]: “In some implementations, the suggested items for placement may be ranked by a recommendation model, that may be trained through pair-wise scoring between pairs of objects. In some implementations, this can include known/observed user brand interest, location, and other such factors. This suggestion and placement of relevant virtual objects into the physical, real world environment may allow a user to view and identify pertinent items which may be of interest in a particular situation, without the need for extensive searching”; Chojnacka, Paragraph [0024]: “In this example, the AR environment 200 may be an environment in which virtual objects may be selected and/or placed, by the system and/or by the user, allowing the user to view, interact with, and manipulate the virtual objects displayed within the AR environment 200”)
through an input module (Chojnacka, Paragraph [0046]: “the computing device 600 may include a user interface system 620 including at least one output device and at least one input device… The at least one input device may include, for example, a touch input device that can receive tactile user inputs, a microphone that can receive audible user inputs, and the like”).
Chojnacka as modified does not explicitly teach a system wherein the manipulation of the one or more designable areas comprises altering size and dimension, and/or changing the product category through an input module.
However, a person having ordinary skill in the art would appreciate that altering size and dimension is obvious given the context of Chojnacka as modified (Chojnacka, Paragraph [0045]: “In some implementations, the user may interact with the virtual objects representing the cup of coffee 550V, the beverage glass 555V, one of the entrees 560V, 565V, 570V and the like to, for example, select the corresponding item. In interacting with a particular virtual object to select the corresponding suggested item, the user may, for example, access additional information related to the item, order the item, move or in some manner change the item, and the like”), where “in some manner change the item”, in the context of a 3D environment, involves changing the physical appearance of the item, which includes altering size and dimension.
Furthermore, Govindarajan teaches changing the category of an object via a user interface (Paragraph [0054]: “FIG. 4 illustrates example user interface providing ranked search results matching a search query… As shown in FIG. 4, the user interface shows results under “Home,” and includes the three categories. The user interface provide users an option to select a specific category by clicking a tab corresponding to the specific category. For example, a user may choose “People” and be provided with objects in the category of People. The search results can include more objects (or objects in different categories) than those shown in FIG. 4”).
Chojnacka as modified and Govindarajan are considered analogous art with respect to matching an input to a recommended output for a user, where the outputs may belong to different categories. While Govindarajan matches results to a search query, it functions similarly in principle to a recommendation engine, since both provide the closest fit(s) to a query/input based on a similarity ranking. A common motivation in matching and recommendation engines is to enable the viewing and selection of recommended entities that may belong to different classes or categories, as evident in Govindarajan. It is noted that in Chojnacka as modified, possible object categories are ranked via the recommendation model, where more than one object category may inherently be applicable as a recommendation; however, Chojnacka does not explicitly state that the category of the object itself can be switched by the user, although there is support for doing so, considering objects can be broadly suggested based on context and swapped by the user.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the recommendation engine and display of products of Chojnacka with the capability, taught by Govindarajan, of displaying and choosing different categories of entities that match an input; doing so would yield the predictable result of allowing a user to choose suggested products from different categories for a designable area.
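For illustration of the manipulations at issue for Claim 13 (moving a placed virtual object, altering its size and dimension, and switching its product category, e.g., via a category tab as in Govindarajan's FIG. 4), the following minimal Python sketch is offered; the class and method names are illustrative assumptions, not taken from the cited references.

```python
# Hypothetical sketch of user manipulation of a placed virtual object.
# Names and the uniform-scale simplification are assumptions.
class PlacedObject:
    def __init__(self, position, scale, category):
        self.position = position  # (x, y, z) in the AR scene
        self.scale = scale        # uniform scale factor
        self.category = category  # current product category

    def move(self, dx, dy, dz):
        """Move the object within the designable area."""
        x, y, z = self.position
        self.position = (x + dx, y + dy, z + dz)

    def resize(self, factor):
        """Alter size and dimension by a scale factor."""
        self.scale *= factor

    def change_category(self, new_category, catalog):
        """Switch to a suggestion from a different category, if offered
        (e.g., via a category tab as in Govindarajan's FIG. 4)."""
        if new_category in catalog:
            self.category = new_category
```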
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Chojnacka as modified, in view of Lanier (US 20180075657 A1).
Regarding Claim 14, the system in accordance with Claim 1 is rejected over Chojnacka as modified as set forth above.
Chojnacka as modified does not teach a processing engine that includes a module configured to allow sharing the modified image sequence.
However, Lanier teaches a system wherein the one or more processing engines include a module configured to allow sharing the modified image sequence (Paragraph [0078]: “Examples of device(s) 108 can include but are not limited to mobile computers, embedded computers, or combinations thereof…Example embedded computers can include network enabled televisions, integrated components for inclusion in a computing device, appliances, microcontrollers, digital signal processors, or any other sort of processing device, or the like. In at least one example, the devices 108 can include mixed reality devices”; Paragraph [0065]: “In at least one example, the user (e.g., user 106A) associated with a device (e.g., device 108A) that initially requests the virtual content item can be the owner of the virtual content item such that he or she can modify the permissions associated with the virtual content item. In at least one example, the owner of the virtual content item can determine which other users (e.g., user 106B and/or user 106C) can view the virtual content item (i.e., whether the virtual content item is visible to the other users 106”).
Chojnacka as modified and Lanier are considered analogous in the art with respect to designing attributes of spaces for display to a user. A common motivation in the art is to enable a user to share a space with edited attributes with other people or users for feedback or collaborative design.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the system for designing a space of Chojnacka as modified with the sharing capabilities of the designing system of Lanier; doing so would yield the predictable result of being able to share a space with recommended objects in designable areas with other people or users.
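For illustration only, a minimal sketch of owner-controlled sharing and visibility of a virtual content item, in the spirit of Lanier's Paragraph [0065] (the owner determines which other users can view the item), follows; the permission model shown is a hypothetical simplification and not Lanier's actual system.

```python
# Illustrative only: owner-controlled visibility of a shared virtual
# content item. The set-based permission model is a hypothetical
# simplification.
class VirtualContentItem:
    def __init__(self, owner):
        self.owner = owner
        self.visible_to = {owner}  # the owner can always view the item

    def share_with(self, requesting_user, target_user):
        """Only the owner may modify the item's permissions."""
        if requesting_user == self.owner:
            self.visible_to.add(target_user)

    def is_visible_to(self, user):
        return user in self.visible_to
```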
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RAYMOND CHUN LAM LI whose telephone number is (571)272-5124. The examiner can normally be reached M-F 8:30-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached at 571-272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RAYMOND CHUN LAM LI/Examiner, Art Unit 2614
/KENT W CHANG/Supervisory Patent Examiner, Art Unit 2614