DETAILED ACTION
Claims 1-12 & 14-20 are currently pending and have been examined in this application. This communication is the first action on the merits.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 9/17/24 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-12 & 14-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The claims are directed to either a system or a method, each of which is a statutory category of invention. (Step 1: YES).
The Examiner has identified method Claim 1 as the claim that represents the claimed invention for analysis; it is similar to system Claims 12 and 14. Claim 1 recites the following limitations (additional elements are emphasized in bold and are considered to be parsed from the remaining abstract idea):
A method for visualizing one or more products in a virtual scene, the one or more products including a first product, the method comprising: using at least one computer hardware processor to perform: obtaining, from a sensing platform having positioned thereon a first physical object representing the first product, a first pose of the first physical object on the sensing platform and a first identifier of the first product; identifying, using the first identifier and from among a plurality of three-dimensional (3D) models corresponding to a respective plurality of products, a first 3D model corresponding to the first product; generating a visualization of the one or more products in the virtual scene at least in part by: generating, at a position and orientation in the virtual scene determined from the first pose of the first physical object, a visualization of the first product using the first 3D model of the first product; and providing the visualization to a display device for displaying the visualization.
which is a process that, under its broadest reasonable interpretation, covers performance of the limitation(s) as a certain method of organizing human activity (a commercial interaction) or a mental process (a concept performed in the human mind) of visualizing physical products.
If a claim limitation, under its broadest reasonable interpretation (BRI), covers performance of the limitation as a commercial interaction, then it falls within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas.
Similarly, if a claim limitation, under its BRI, covers performance of the limitation in the human mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. (Claims can recite a mental process even if they are claimed as being performed on a computer. Gottschalk v. Benson, 409 U.S. 63 (1972); "Courts have examined claims that required the use of a computer and still found that the underlying, patent-ineligible invention could be performed via pen and paper or in a person’s mind." Versata Dev. Group v. SAP Am., Inc., 793 F.3d 1306, 1335, 115 USPQ2d 1681, 1702 (Fed. Cir. 2015).)
Accordingly, the claim recites an abstract idea. (Step 2A-Prong 1: YES. The claims are abstract)
This judicial exception is not integrated into a practical application. Limitations that are not indicative of integration into a practical application include: (1) adding the words “apply it” (or an equivalent) with the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)); (2) adding insignificant extra-solution activity to the judicial exception (MPEP 2106.05(g)); and (3) generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)). The virtual scene, sensing platform, processor, 3D models, and display device in Claim 1 are merely generic computer components (as is the non-transitory CRM of Claim 12). The computer hardware is recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that it amounts to no more than mere instructions to implement an abstract idea by adding the words “apply it” (or an equivalent) with the judicial exception. Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, claim 1 is directed to an abstract idea without a practical application. (Step 2A-Prong 2: NO. The additional claimed elements are not integrated into a practical application.)
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and as an ordered combination, they do not add significantly more (also known as an “inventive concept”) to the exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using computer hardware amounts to no more than mere instructions to implement an abstract idea by adding the words “apply it” (or an equivalent) with the judicial exception. Mere instructions to implement an abstract idea on or with the use of generic computer components cannot provide an inventive concept, rendering the claim patent ineligible. Thus, claim 1 is not patent eligible. (Step 2B: NO. The claims do not provide significantly more.)
The dependent claims further define the abstract idea that is present in their respective independent claims and hence are abstract for at least the reasons presented above. The dependent claims do not include any additional elements (including Claim 5 – ArUco marker/QR code are data representations, Claim 11 – ray tracing – which further is implementing the abstract idea on a generic computer component, Claim 15 – imaging device – which further implements the abstract idea on a generic computer component, Claim 20 – projector and screen – which are generic computer components used to implement the abstract idea) that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception when considered both individually and as an ordered combination. Therefore, the dependent claims are directed to an abstract idea. Thus, the aforementioned claims are not patent-eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6-7, 14, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Arbter-A (EP 1335317, Fig. 1) in view of Arbter-B (embodiment referencing EP 1047014; see 0011 & 0030).
Claim 1.
Arbter-A teaches the following limitations:
A method for visualizing one or more products in a virtual scene,
([0027] With the device shown in FIG. 1, in a virtual reality, the position of an object 1 on a graphic display device 3 controlled via a computer 2 can be easily adjusted. The eye 4 of a user views the virtually represented object 1 on the representation device 3.)
the one or more products including a first product, the method comprising: using at least one computer hardware processor to perform:
(See Fig. 1, product corresponds to car 1; [0031] The computer 2, the device shown in FIG. 1 for displaying an object 1 in a virtual reality in the desired position, has a device 15 for image processing and a computer graphics device 16.)
[media_image1.png: 727 × 437, greyscale]
obtaining, from a sensing [platform having positioned thereon] a first physical object representing the first product, a first pose of the first physical object on the sensing platform and a first identifier of the first product;
([0028] The model object 6 is a portable, easily graspable object, for example, as in FIG. 1, a block or a physical model of the object 1 to be virtually represented. The model object 6 observed by an electronic camera 7 has markings 9 on its underside, which are arranged such that it is suitable, on the one hand, for determining the spatial object position from the camera image and, on the other hand, also for distinguishing different objects 1 to be represented, that is to say for identification. Fig. 1)
identifying, using the first identifier and from among a plurality of three-dimensional (3D) models corresponding to a respective plurality of products, a first 3D model corresponding to the first product;
([0028] The model object 6 is a portable, easily graspable object, for example, as in FIG. 1, a block or a physical model of the object 1 to be virtually represented. The model object 6 observed by an electronic camera 7 has markings 9 on its underside, which are arranged such that it is suitable, on the one hand, for determining the spatial object position from the camera image and, on the other hand, also for distinguishing different objects 1 to be represented, that is to say for identification.)
generating a visualization of the one or more products in the virtual scene at least in part by: generating, at a position and orientation in the virtual scene determined from the first pose of the first physical object, a visualization of the first product using the first 3D model of the first product; and
([0031] The computer 2, the device shown in FIG. 1 for displaying an object 1 in a virtual reality in the desired position, has a device 15 for image processing and a computer graphics device 16. In this case, the device 15 for image processing contains means for recording the images, segmenting means for separating the image of the marking symbol 9 from all remaining image parts, means for object identification for calculating the spatial object position from the camera image of the markings 9. The computer graphics unit 16 contains means for storing virtual 3D objects and means for calculating the views of objects 1 to be virtually represented depending on the respective observer position.)
providing the visualization to a display device for displaying the visualization.
([Fig. 1], object 1 displayed on display device 3)
Arbter-A does not explicitly teach the following limitation; however, Arbter-B teaches:
[sensing] platform having positioned thereon [a first physical object]
([0030] the optical transmissive disc 14 can be fitted with the physical model object 6 in the detection range of the electronic camera 7, so that it can be easily guided on the surface of the disc 14 with the user's hand 5 in those cases in which only the three degrees of freedom of movement on the disc are to be controlled, i.e. a displacement on the disc 14 and a rotation about a vertical axis perpendicular to the plane of the disc.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Arbter-A with Arbter-B in order to position the physical 3D object on the surface of the sensing platform [Arbter-B – 0030].
Claim 2.
Arbter-A in combination with the references taught in Claim 1 teach those respective limitations. Arbter-A further teaches:
wherein the first physical object has a marker on its surface, the method further comprising: detecting the marker on the surface of the first physical object; and determining, using the marker, the first pose of the first physical object and the first identifier of the first product.
([Fig. 1]; [0028] The model object 6 observed by an electronic camera 7 has markings 9 on its underside, which are arranged such that it is suitable, on the one hand, for determining the spatial object position from the camera image and, on the other hand, also for distinguishing different objects 1 to be represented, that is to say for identification. [0022] The particular advantage of such a three-dimensional marking structure is moreover that the different tilt angles can be detected with only one camera. [0031])
Claim 3.
Arbter-A in combination with the references taught in Claim 1 teach those respective limitations. Arbter-A further teaches:
wherein the one or more products comprise multiple products, the multiple products including the first product, the method comprising:
([0009] The results are transmitted to a graphics workstation on which the associated objects, for example one, two or more cars of different make, are displayed in the positions specified by the operator. In order to ensure simple, rapid and reliable segmentation of the markings in the camera image, the markings are either made of retroreflective material and directly illuminated, or made of scattering, colored material and indirectly illuminated.)
obtaining, from the sensing platform [having positioned thereon] multiple physical objects representing the multiple products, poses of the physical objects on the sensing platform and identifiers of the multiple products, the poses including the first pose of the first physical object and the identifiers including the first identifier of the first product;
([0028] The model object 6 is a portable, easily graspable object, for example, as in FIG. 1, a block or a physical model of the object 1 to be virtually represented. The model object 6 observed by an electronic camera 7 has markings 9 on its underside, which are arranged such that it is suitable, on the one hand, for determining the spatial object position from the camera image and, on the other hand, also for distinguishing different objects 1 to be represented, that is to say for identification. Fig. 1, [0009])
identifying 3D models of the multiple products using the identifiers; and
([0028] The model object 6 is a portable, easily graspable object, for example, as in FIG. 1, a block or a physical model of the object 1 to be virtually represented. The model object 6 observed by an electronic camera 7 has markings 9 on its underside, which are arranged such that it is suitable, on the one hand, for determining the spatial object position from the camera image and, on the other hand, also for distinguishing different objects 1 to be represented, that is to say for identification. [0009])
generating the visualization of the one or more products in the virtual scene at least in part by generating, at positions and orientations in the virtual scene determined from the poses of the physical objects, visualizations of the multiple products using the 3D models of the multiple products.
([0031] The computer 2, the device shown in FIG. 1 for displaying an object 1 in a virtual reality in the desired position, has a device 15 for image processing and a computer graphics device 16. In this case, the device 15 for image processing contains means for recording the images, segmenting means for separating the image of the marking symbol 9 from all remaining image parts, means for object identification for calculating the spatial object position from the camera image of the markings 9. The computer graphics unit 16 contains means for storing virtual 3D objects and means for calculating the views of objects 1 to be virtually represented depending on the respective observer position.)
Arbter-A does not explicitly teach the following limitation; however, Arbter-B teaches:
[sensing] platform having positioned thereon [a first physical object]
([0030] the optical transmissive disc 14 can be fitted with the physical model object 6 in the detection range of the electronic camera 7, so that it can be easily guided on the surface of the disc 14 with the user's hand 5 in those cases in which only the three degrees of freedom of movement on the disc are to be controlled, i.e. a displacement on the disc 14 and a rotation about a vertical axis perpendicular to the plane of the disc.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Arbter-A with Arbter-B in order to position the physical 3D object on the surface of the sensing platform [Arbter-B – 0030].
Claim 4.
Arbter-A in combination with the references taught in Claim 3 teach those respective limitations. Arbter-A further teaches:
detecting markers on surfaces of the physical objects; and determining, using the markers, the poses of the physical objects on the sensing platform and the identifiers of the multiple products.
([Fig. 1]; [0009] The results are transmitted to a graphics workstation on which the associated objects, for example one, two or more cars of different make, are displayed in the positions specified by the operator. In order to ensure simple, rapid and reliable segmentation of the markings in the camera image, the markings are either made of retroreflective material and directly illuminated, or made of scattering, colored material and indirectly illuminated. [0022] The particular advantage of such a three-dimensional marking structure is moreover that the different tilt angles can be detected with only one camera. [0028] The model object 6 observed by an electronic camera 7 has markings 9 on its underside, which are arranged such that it is suitable, on the one hand, for determining the spatial object position from the camera image and, on the other hand, also for distinguishing different objects 1 to be represented, that is to say for identification. [0031] The computer 2, the device shown in FIG. 1 for displaying an object 1 in a virtual reality in the desired position, has a device 15 for image processing and a computer graphics device 16. In this case, the device 15 for image processing contains means for recording the images, segmenting means for separating the image of the marking symbol 9 from all remaining image parts, means for object identification for calculating the spatial object position from the camera image of the markings 9. The computer graphics unit 16 contains means for storing virtual 3D objects and means for calculating the views of objects 1 to be virtually represented depending on the respective observer position.)
Claim 6.
Arbter-A in combination with the references taught in Claim 1 teach those respective limitations. Arbter-A further teaches:
displaying the generated visualization of the one or more products in the virtual scene using the display device.
([Fig. 1], object 1 displayed on display device 3)
Claim 7.
Arbter-A in combination with the references taught in Claim 1 teach those respective limitations. Arbter-A further teaches:
wherein the first physical object is a physical 3D model of the first product.
([0028] The model object 6 is a portable, easily graspable object, for example, as in FIG. 1, a block or a physical model of the object 1 to be virtually represented. The model object 6 observed by an electronic camera 7 has markings 9 on its underside, which are arranged such that it is suitable, on the one hand, for determining the spatial object position from the camera image and, on the other hand, also for distinguishing different objects 1 to be represented, that is to say for identification.)
Claim 14.
Arbter-A teaches the following limitations:
A system for visualizing one or more products in a virtual scene, the one or more products including a first product, the system comprising:
([0027] With the device shown in FIG. 1, in a virtual reality, the position of an object 1 on a graphic display device 3 controlled via a computer 2 can be easily adjusted. The eye 4 of a user views the virtually represented object 1 on the representation device 3.)
a sensing [platform having positioned thereon] a first physical object representing the first product;
([0028] The model object 6 is a portable, easily graspable object, for example, as in FIG. 1, a block or a physical model of the object 1 to be virtually represented. The model object 6 observed by an electronic camera 7 has markings 9 on its underside, which are arranged such that it is suitable, on the one hand, for determining the spatial object position from the camera image and, on the other hand, also for distinguishing different objects 1 to be represented, that is to say for identification. Fig. 1)
at least one computer hardware processor configured to:
(See Fig. 1, product corresponds to car 1; [0031] The computer 2, the device shown in FIG. 1 for displaying an object 1 in a virtual reality in the desired position, has a device 15 for image processing and a computer graphics device 16.)
the display device configured to display the visualization.
([Fig. 1], object 1 displayed on display device 3)
Arbter-A does not explicitly teach the following limitation; however, Arbter-B teaches:
[sensing] platform having positioned thereon [a first physical object]
([0030] the optical transmissive disc 14 can be fitted with the physical model object 6 in the detection range of the electronic camera 7, so that it can be easily guided on the surface of the disc 14 with the user's hand 5 in those cases in which only the three degrees of freedom of movement on the disc are to be controlled, i.e. a displacement on the disc 14 and a rotation about a vertical axis perpendicular to the plane of the disc.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Arbter-A with Arbter-B in order to position the physical 3D object on the surface of the sensing platform [Arbter-B – 0030].
The remainder of the claim is rejected using the same rationale as Claim 1.
Claim 17.
Arbter-A in combination with the references taught in Claim 14 teach those respective limitations. Arbter-A further teaches:
wherein the at least one computer hardware processor is part of the sensing platform.
(See Fig. 1, product corresponds to car 1; [0031] The computer 2, the device shown in FIG. 1 for displaying an object 1 in a virtual reality in the desired position, has a device 15 for image processing and a computer graphics device 16.)
Claim 19.
Arbter-A in combination with the references taught in Claim 14 teach those respective limitations. Arbter-A further teaches:
wherein the first physical object is a physical 3D model of the first product, a card having an image of the first product thereon, or a swatch of a material having the image of the first product thereon.
([0028] The model object 6 is a portable, easily graspable object, for example, as in FIG. 1, a block or a physical model of the object 1 to be virtually represented. The model object 6 observed by an electronic camera 7 has markings 9 on its underside, which are arranged such that it is suitable, on the one hand, for determining the spatial object position from the camera image and, on the other hand, also for distinguishing different objects 1 to be represented, that is to say for identification. Fig. 1)
Claims 5, 8-9, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Arbter-A (EP 1335317) in view of Arbter-B (embodiment referencing EP 1047014; see 0011 & 0030), and further in view of Erez (US 20220156684).
Claim 5.
Arbter-A in combination with the references taught in Claim 2 teach those respective limitations. Arbter-A does not explicitly teach the following limitation; however, Erez teaches:
wherein the first marker comprises an ArUco marker and/or QR code.
([0019] RealGift card may be activated…by presenting the card to the cashier (human or automated), who can potentially scan, e.g., a barcode or QR code)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Arbter-A with Erez in order to recognize a marker using a QR code [Erez – 0019].
Claim 8.
Arbter-A in combination with the references taught in Claim 1 teach those respective limitations. Arbter-A does not explicitly teach the following limitation; however, Erez teaches:
wherein the first physical object has an image of the first product thereon.
([0019] A RealGift card may be for example a physical card or piece of paper, e.g., a credit card-sized plastic item, with information printed on it [0120] a RealGift card was implemented, e.g., as a gift card with an image of a product)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Arbter-A with Erez in order to provide a physical card with an image of a product on it, achieving more thoughtfulness in providing a gift card to someone [Erez – 0007, 0120].
Claim 9.
Arbter-A in combination with the references taught in Claim 8 teach those respective limitations. Arbter-A does not explicitly teach the following limitation; however, Erez teaches:
wherein the first physical object is a card or a swatch of material.
([0019] A RealGift card may be for example a physical card or piece of paper, e.g., a credit card-sized plastic item, with information printed on it)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Arbter-A with Erez in order to provide a physical card with an image of a product on it, achieving more thoughtfulness in providing a gift card to someone [Erez – 0007, 0120].
Claim 12.
Arbter-A teaches the following limitations:
A system for visualizing one or more products in a virtual scene, the one or more products including a first product, the system comprising: at least one computer hardware processor; and … a method for visualizing one or more products in a virtual scene, the one or more products including a first product, the method comprising:
(Fig. 1)
Arbter-A does not explicitly teach the following limitations; however, Erez teaches:
at least one non-transitory computer-readable storage medium storing processor executable instructions that, when executed by the at least one computer hardware processor, cause the at least one computer hardware processor to perform
([0068] The modules can be implemented as hardware components, software modules, or any combination thereof. For example, the modules described can be software modules implemented as instructions on a non-transitory memory storing instructions capable of being executed by a processor or a controller on a machine described in FIG. 15. In some embodiments the processor described in FIG. 15 may be configured to carry out embodiments of the present invention and/or execute modules as described herein by for example executing code stored in memory. [0225] A storage medium typically may be non-transitory or comprise a non-transitory device.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Arbter-A with Erez in order to provide a system with a non-transitory CRM storing instructions executable by a processor to perform a method [Erez – 0220].
The remainder of the claim is rejected using the same rationale as Claim 1.
Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Arbter-A (EP 1335317) in view of Arbter-B (embodiment referencing EP 1047014; see 0011 & 0030), and further in view of Rafii (US 20180114264).
Claim 10.
Arbter-A in combination with the references taught in Claim 1 teach those respective limitations. Arbter-A does not explicitly teach the following limitation; however, Rafii teaches:
wherein the first product comprises furniture or art.
([0050] The personal device may then stage a three-dimensional model of a product (e.g., a couch) within the 3D model of the environment of the shopper's living room)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Arbter-A with Rafii in order to substitute a product with furniture such as a couch [Rafii – 0050]. The simple substitution of one known element for another producing a predictable result renders the claim obvious.
Claim 11.
Arbter-A in combination with the references taught in Claim 1 teach those respective limitations. Arbter-A does not explicitly teach the following limitation; however, Rafii teaches:
rendering the generated visualization of the one or more products in the virtual scene using a ray tracing technique.
([0078] In operation 270, the system renders the 3D model of the object within the virtual 3D environment using a 3D rendering engine (e.g., a raytracing engine) from the perspective of the virtual camera.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Arbter-A with Rafii in order to render the visualization using ray tracing [Rafii – 0078].
Claims 15-16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Arbter-A (EP 1335317) in view of Arbter-B (embodiment referencing EP 1047014; see 0011 & 0030), and further in view of Rosenberg (US 20200159361).
Claim 15.
Arbter-A in combination with the references taught in Claim 14 teach those respective limitations. Arbter-A further teaches:
wherein the sensing platform comprises: a [translucent] surface
([0028] The model object 6 is a portable, easily graspable object, for example, as in FIG. 1, a block or a physical model of the object 1 to be virtually represented. The model object 6 observed by an electronic camera 7 has markings 9 on its underside, which are arranged such that it is suitable, on the one hand, for determining the spatial object position from the camera image and, on the other hand, also for distinguishing different objects 1 to be represented, that is to say for identification. Fig. 1)
an imaging device placed in proximity to the [translucent] surface.
([0028] The model object 6 is a portable, easily graspable object, for example, as in FIG. 1, a block or a physical model of the object 1 to be virtually represented. The model object 6 observed by an electronic camera 7 has markings 9 on its underside, which are arranged such that it is suitable, on the one hand, for determining the spatial object position from the camera image and, on the other hand, also for distinguishing different objects 1 to be represented, that is to say for identification. Fig. 1)
Arbter-A does not explicitly teach the following limitation; however, Arbter-B teaches:
on which the first physical object representing the first product is positioned; and
([0030] the optical transmissive disc 14 can be fitted with the physical model object 6 in the detection range of the electronic camera 7, so that it can be easily guided on the surface of the disc 14 with the user's hand 5 in those cases in which only the three degrees of freedom of movement on the disc are to be controlled, i.e. a displacement on the disc 14 and a rotation about a vertical axis perpendicular to the plane of the disc.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Arbter-A with Arbter-B in order to position the physical 3D object on the surface of the sensing platform [Arbter-B – 0030].
Arbter-A does not explicitly teach the following limitation; however, Rosenberg teaches:
a translucent surface
([0131] In one variation, the first method S100 and/or the second S200 are executed by a singular (e.g., unitary) device including a computer system, a digital display, a transparent (or translucent) touch sensor surface arranged over the digital display.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the transparent surface of Arbter-A and the combined references with Rosenberg's translucent surface in order to image a product. The simple substitution of one known element for another producing a predictable result renders the claim obvious.
Claim 16.
Arbter-A in combination with the references taught in Claim 15 teaches those respective limitations. Arbter-A further teaches:
wherein: the first physical object has a marker on its surface, and the imaging device is configured to capture at least one image of the marker.
([Fig. 1]; [0028] The model object 6 observed by an electronic camera 7 has markings 9 on its underside, which are arranged such that it is suitable, on the one hand, for determining the spatial object position from the camera image and, on the other hand, also for distinguishing different objects 1 to be represented, that is to say for identification. [0022] The particular advantage of such a three-dimensional marking structure is moreover that the different tilt angles can be detected with only one camera. [0031])
Claim 18.
Arbter-A in combination with the references taught in Claim 14 teaches those respective limitations. Arbter-A does not explicitly teach the following limitation; however, Rosenberg teaches it:
wherein the at least one computer hardware processor is remote from the sensing platform.
(Rosenberg – [0027] The computer system can then process each touch image remotely from the input device, including identifying discrete input areas, calculating a peak force per discrete input area, calculating a total force per discrete input area, labeling or characterizing a type of physical object contacting the touch sensor surface and corresponding to each discrete input area, etc., such as described below. Alternatively, the input device can locally process a touch image before uploading the touch image and/or data extracted from the touch image ( e.g., types, force magnitudes, and locations of discrete input areas) to the computer system.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Arbter-A with Rosenberg in order to process received images remotely from an input device [Rosenberg – 0027].
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Arbter-A (EP 1335317, Fig. 1) in view of Arbter-B (embodiment referencing EP 1047014 – see 0011 & 0030), and further in view of Reichow (US 20200089015).
Claim 20.
Arbter-A in combination with the references taught in Claim 14 teaches those respective limitations. Arbter-A does not explicitly teach the following limitation; however, Reichow teaches it:
wherein the display device comprises a projector and a screen.
(Fig. 3, [0041] Display system 300 includes a projector 360 and the projection screen 250.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Arbter-A with Reichow in order to display images through the projection of light involving a projector and screen [Reichow – 0034, 0041].
Conclusion
The prior art made of record, and not relied upon, considered pertinent to applicant's disclosure or directed to the state of the art is listed on the enclosed PTO-892.
The following is a brief description of relevant prior art that was cited but not applied:
Kanoria (US 20240005594) provides a method for virtualization of tangible object components.
Wade (US 20210248669) provides systems and methods for generating augmented reality scenes for physical items.
Whytock (US 20130002591) provides systems and methods for virtual object adjustment via physical object detection.
Arbter (EP 1047014) provides a method for controlling the position of a graphically displayed object.
Mott (US 20190156585) provides an augmented reality product preview.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDULMAJEED AZIZ, whose telephone number is (571) 270-5046. The examiner can normally be reached M-F, 7:00-3:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ALLANA LEWIN BIDDER can be reached at (571) 272-5560. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ABDULMAJEED AZIZ/Examiner, Art Unit 2875