DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 20, 32 and 35 are objected to because of the following informalities:
For claim 20, Examiner believes this claim should be amended in the following manner:
The system of claim 18, wherein the processor is further configured to:
receive a user input selecting the at least one hotspot, the at least one hotspot enabling access to [[a]] the separate media element when selected by a viewer of the interactive rotatable 360-degree presentation of the object; and
display, in response to the user input, the separate media element.
For claim 32, Examiner believes this claim should be amended in the following manner:
A computer-implemented method, the computer-implemented method comprising:
obtaining data describing an object, the data describing the [[first]] object including information about a plurality of images of the object to be obtained and converted into an interactive rotatable 360-degree presentation of the object;
obtaining the plurality of images of the object;
automatically rearranging the plurality of images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees; and
automatically determining whether to add at least one hotspot to at least one image in the at least one sequence.
For claim 35, Examiner believes this claim should be amended in the following manner:
A system for generating an interactive rotatable 360-degree presentation of a first object, the system comprising:
an interface configured to:
obtain data describing the first object, the data describing the first object including information about a plurality of images of the first object to be obtained and converted into the interactive rotatable 360-degree presentation of the first object, and additional information about the first object; and
obtain the plurality of images of the first object; and
a processor configured to:
automatically merge the plurality of images into at least one interactive rotatable 360-degree view of the first object, the at least one interactive rotatable 360-degree view comprising at least one of an exterior view of the first object and an interior view of the first object;
display a first image of the plurality of images for the interactive rotatable 360-degree presentation;
receive a drag event; determine, responsive to the receiving, a drag distance of the drag event; determine, based on the drag distance, a display angle of the first object; and display, based on the display angle, a second image of the plurality of images for the interactive rotatable 360-degree presentation.
Appropriate correction is required.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 16-22, 24-27 and 29-31 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 9,865,069 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1). Claim 23 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 9,865,069 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Lanahan et al. (U.S. Patent Application Publication 2010/0005417 A1). Claim 28 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 9,865,069 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). Claims 32-34 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 9,865,069. Claim 35 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 9,865,069 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1).
The following is a claim comparison of claims 16-35 of the instant application and claim 1 of U.S. Patent 9,865,069.
Application No. 18/732,900 | U.S. Patent 9,865,069
16. A system for generating an interactive rotatable 360-degree presentation of an object, the system comprising: an interface configured to:
obtain data describing the object, and
obtain a plurality of images of the object; and
a processor configured to: automatically rearrange the plurality of images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees, and
automatically determine whether to add at least one hotspot to at least one image in the at least one sequence.
1. A computer-implemented method for generating an interactive rotatable 360-degree presentation of an object, the method comprising:
obtaining data describing the object and at least one other object contained in a record including a plurality of data arranged in rows and columns, each row corresponding to a different object, the data including (i) information about a plurality of pre-existing images of the object to be obtained and converted into the interactive rotatable 360-degree presentation of the object, and (ii) additional information about the object, the additional information about the object including at least one of a vehicle make, a vehicle model, a vehicle identification number (VIN), or a vehicle body style, wherein obtaining the data describing the object includes (i) determining which columns of the plurality of data include information about the plurality of pre-existing images of the object to be obtained, (ii) determining which columns of the plurality of data include the additional information about the object, and (iii) determining which row of the plurality of data corresponds to the object by comparing data contained in the columns that include the additional information about the object with predetermined criteria;
automatically obtaining the plurality of pre-existing images of the object from a source separate from the source of the data describing the object, the source of the plurality of pre-existing images of the object being specified by the information about the plurality of pre-existing images of the object, the plurality of pre-existing images being initially arranged in a first sequence;
automatically resizing the plurality of pre-existing images such that the pre-existing images are of a common size;
automatically rearranging the plurality of pre-existing images from the first sequence into a second sequence and a third sequence, the second sequence and the third sequence being different from the first sequence and different from each other, the second sequence and the third sequence each including ordered pre-existing images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees, by (i) determining the order for the pre-existing images based on predetermined information corresponding to the object including a notation describing which images from the plurality of pre-existing images should be included in the second sequence and the third sequence and in which order, (ii) selecting pre-existing images from the plurality of pre-existing images that correspond to an exterior view of the object for inclusion in the second sequence, and (iii) selecting pre-existing images from the plurality of pre-existing images that correspond to an interior view of the object for inclusion in the third sequence;
automatically determining whether to add at least one hotspot to at least one pre-existing image in the second sequence or the third sequence, and if the at least one hotspot is to be added, automatically adding the at least one hotspot to the at least one pre-existing image, the hotspot being associated with a separate media element and enabling access to the media element when selected by a viewer of the interactive rotatable 360-degree presentation of the object, by (i) determining a position for the hotspot on each of the plurality of pre-existing images based on predetermined information corresponding to the object, (ii) determining the separate media element based on predetermined information corresponding to the object, the separate media element being any of text, an additional image, a video, a web page link, and an additional interactive rotatable 360-degree presentation, and (iii) adding the hotspot if the additional information about the object meets certain predetermined criteria;
automatically merging the ordered pre-existing images of the second sequence including exterior images into an exterior 360-degree view of the object; automatically merging the ordered pre-existing images of the second sequence including interior images into an interior 360-degree view of the object; and automatically merging the exterior 360-degree view and the interior 360-degree view into an interactive rotatable 360-degree presentation of the object.
17 | 1
18 | 1
19 | 1
20 | 1
21 | 1
22 | 1
23. The system of claim 16, wherein the at least one hotspot automatically follows an appropriate point on the interactive rotatable 360-degree presentation as the interactive rotatable 360-degree presentation rotates. | 1
24 | 1
25 | 1
26 | 1
27 | 1
28. The system of claim 16, wherein the plurality of images comprises an exterior image of the object, an interior image of the object, and a close-up image of the object. | 1
29 | 1
30 | 1
31 | 1
32. A computer implemented method, the method comprising:
obtaining data describing an object, the data describing the first object including information about a plurality of images of the object to be obtained and converted into an interactive rotatable 360-degree presentation of the object;
obtaining the plurality of images of the object;
automatically rearranging the plurality of images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees; and
automatically determining whether to add at least one hotspot to at least one image in the at least one sequence.
1. A computer-implemented method for generating an interactive rotatable 360-degree presentation of an object, the method comprising:
obtaining data describing the object and at least one other object contained in a record including a plurality of data arranged in rows and columns, each row corresponding to a different object, the data including (i) information about a plurality of pre-existing images of the object to be obtained and converted into the interactive rotatable 360-degree presentation of the object, and (ii) additional information about the object, the additional information about the object including at least one of a vehicle make, a vehicle model, a vehicle identification number (VIN), or a vehicle body style, wherein obtaining the data describing the object includes (i) determining which columns of the plurality of data include information about the plurality of pre-existing images of the object to be obtained, (ii) determining which columns of the plurality of data include the additional information about the object, and (iii) determining which row of the plurality of data corresponds to the object by comparing data contained in the columns that include the additional information about the object with predetermined criteria;
automatically obtaining the plurality of pre-existing images of the object from a source separate from the source of the data describing the object, the source of the plurality of pre-existing images of the object being specified by the information about the plurality of pre-existing images of the object, the plurality of pre-existing images being initially arranged in a first sequence;
automatically resizing the plurality of pre-existing images such that the pre-existing images are of a common size;
automatically rearranging the plurality of pre-existing images from the first sequence into a second sequence and a third sequence, the second sequence and the third sequence being different from the first sequence and different from each other, the second sequence and the third sequence each including ordered pre-existing images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees, by (i) determining the order for the pre-existing images based on predetermined information corresponding to the object including a notation describing which images from the plurality of pre-existing images should be included in the second sequence and the third sequence and in which order, (ii) selecting pre-existing images from the plurality of pre-existing images that correspond to an exterior view of the object for inclusion in the second sequence, and (iii) selecting pre-existing images from the plurality of pre-existing images that correspond to an interior view of the object for inclusion in the third sequence;
automatically determining whether to add at least one hotspot to at least one pre-existing image in the second sequence or the third sequence, and if the at least one hotspot is to be added, automatically adding the at least one hotspot to the at least one pre-existing image, the hotspot being associated with a separate media element and enabling access to the media element when selected by a viewer of the interactive rotatable 360-degree presentation of the object, by (i) determining a position for the hotspot on each of the plurality of pre-existing images based on predetermined information corresponding to the object, (ii) determining the separate media element based on predetermined information corresponding to the object, the separate media element being any of text, an additional image, a video, a web page link, and an additional interactive rotatable 360-degree presentation, and (iii) adding the hotspot if the additional information about the object meets certain predetermined criteria;
automatically merging the ordered pre-existing images of the second sequence including exterior images into an exterior 360-degree view of the object; automatically merging the ordered pre-existing images of the second sequence including interior images into an interior 360-degree view of the object; and automatically merging the exterior 360-degree view and the interior 360-degree view into an interactive rotatable 360-degree presentation of the object.
33 | 1
34 | 1
35. A system for generating an interactive rotatable 360-degree presentation of a first object, the system comprising: an interface configured to:
obtain data describing the first object, the data including information about a plurality of images of the first object to be obtained and converted into the interactive rotatable 360-degree presentation of the first object, and additional information about the first object; and
obtain the plurality of images of the first object; and
a processor configured to: automatically merge the plurality of images into at least one interactive rotatable 360-degree view of the first object, the at least one interactive rotatable 360-degree view comprising at least one of an exterior view of the first object and an interior view of the first object;
display a first image of the plurality of images for the interactive rotatable 360-degree presentation; receive a drag event; determine, responsive to the receiving, a drag distance of the drag event; determine, based on the drag distance, a display angle of the first object; and display, based on the display angle, a second image of the plurality of images for the interactive rotatable 360-degree presentation.
1. A computer-implemented method for generating an interactive rotatable 360-degree presentation of an object, the method comprising:
obtaining data describing the object and at least one other object contained in a record including a plurality of data arranged in rows and columns, each row corresponding to a different object, the data including (i) information about a plurality of pre-existing images of the object to be obtained and converted into the interactive rotatable 360-degree presentation of the object, and (ii) additional information about the object, the additional information about the object including at least one of a vehicle make, a vehicle model, a vehicle identification number (VIN), or a vehicle body style, wherein obtaining the data describing the object includes (i) determining which columns of the plurality of data include information about the plurality of pre-existing images of the object to be obtained, (ii) determining which columns of the plurality of data include the additional information about the object, and (iii) determining which row of the plurality of data corresponds to the object by comparing data contained in the columns that include the additional information about the object with predetermined criteria;
automatically obtaining the plurality of pre-existing images of the object from a source separate from the source of the data describing the object, the source of the plurality of pre-existing images of the object being specified by the information about the plurality of pre-existing images of the object, the plurality of pre-existing images being initially arranged in a first sequence;
automatically resizing the plurality of pre-existing images such that the pre-existing images are of a common size;
automatically rearranging the plurality of pre-existing images from the first sequence into a second sequence and a third sequence, the second sequence and the third sequence being different from the first sequence and different from each other, the second sequence and the third sequence each including ordered pre-existing images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees, by (i) determining the order for the pre-existing images based on predetermined information corresponding to the object including a notation describing which images from the plurality of pre-existing images should be included in the second sequence and the third sequence and in which order, (ii) selecting pre-existing images from the plurality of pre-existing images that correspond to an exterior view of the object for inclusion in the second sequence, and (iii) selecting pre-existing images from the plurality of pre-existing images that correspond to an interior view of the object for inclusion in the third sequence;
automatically determining whether to add at least one hotspot to at least one pre-existing image in the second sequence or the third sequence, and if the at least one hotspot is to be added, automatically adding the at least one hotspot to the at least one pre-existing image, the hotspot being associated with a separate media element and enabling access to the media element when selected by a viewer of the interactive rotatable 360-degree presentation of the object, by (i) determining a position for the hotspot on each of the plurality of pre-existing images based on predetermined information corresponding to the object, (ii) determining the separate media element based on predetermined information corresponding to the object, the separate media element being any of text, an additional image, a video, a web page link, and an additional interactive rotatable 360-degree presentation, and (iii) adding the hotspot if the additional information about the object meets certain predetermined criteria;
automatically merging the ordered pre-existing images of the second sequence including exterior images into an exterior 360-degree view of the object; automatically merging the ordered pre-existing images of the second sequence including interior images into an interior 360-degree view of the object; and automatically merging the exterior 360-degree view and the interior 360-degree view into an interactive rotatable 360-degree presentation of the object.
For independent claim 16, claim 1 of U.S. Patent 9,865,069 does not disclose a system comprising an interface and a processor. However, these limitations are well-known in the art as disclosed in Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1). It would have been obvious to apply the use of removable media in combination with a processor to appropriately implement the functions of a computer system in presenting a 360-degree view of an object (page 2/par. 18 and 22 and page 4/par. 60) as disclosed in Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1). Claim 1 of U.S. Patent 9,865,069 otherwise discloses method steps corresponding to the functions of the system of claim 16 as shown in the claim chart above. Therefore, claim 16 is not patentably distinct from claim 1 of U.S. Patent 9,865,069.
For dependent claims 17-22, 24-27 and 29-31, claim 1 of U.S. Patent 9,865,069 mirrors and recites the limitations of claims 17-22, 24-27 and 29-31 as shown in the claim chart above. Therefore, claims 17-22, 24-27 and 29-31 are not patentably distinct from claim 1 of U.S. Patent 9,865,069.
For dependent claim 23, claim 1 of U.S. Patent 9,865,069 does not disclose that a hotspot is associated with a point. However, this limitation is well-known in the art as disclosed in Lanahan et al. (U.S. Patent Application Publication 2010/0005417 A1). It would have been obvious to apply the use of the designation of a point in media for the addition of a hotspot and an associated media element for access by a user (page 3/par. 37) as taught in Lanahan et al. (U.S. Patent Application Publication 2010/0005417 A1) so that the hotspot automatically follows the point as the interactive rotatable 360-degree presentation rotates. Therefore, claim 23 is not patentably distinct from claim 1 of U.S. Patent 9,865,069.
For dependent claim 28, claim 1 of U.S. Patent 9,865,069 does not disclose a close-up image. However, this limitation is well-known in the art as disclosed in Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). It would have been obvious to apply the use of an arrangement of images including a close-up image to present an image at a greater focus (page 3/par. 45) as taught in Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). Therefore, claim 28 is not patentably distinct from claim 1 of U.S. Patent 9,865,069.
For independent claim 32, claim 1 of U.S. Patent 9,865,069 anticipates and discloses the same limitations as claim 32. Therefore, claim 32 is not patentably distinct from claim 1 of U.S. Patent 9,865,069.
For dependent claims 33-34, claim 1 of U.S. Patent 9,865,069 mirrors and recites the limitations of claims 33-34 as shown in the claim chart above. Therefore, claims 33-34 are not patentably distinct from claim 1 of U.S. Patent 9,865,069.
For independent claim 35, claim 1 of U.S. Patent 9,865,069 does not disclose a system comprising an interface and a processor for displaying a first image; receiving a drag event; determining, responsive to the receiving, a drag distance of the drag event; determining, based on the drag distance, a display angle of a first object; and displaying, based on the display angle, a second image. However, these limitations are well-known in the art as disclosed in Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1). It would have been obvious to apply the use of removable media in combination with a processor to appropriately implement the functions of a computer system in presenting a 360-degree view of an object (page 2/par. 18 and 22 and page 4/par. 60), displaying a first image of a plurality of images, determining a user dragging event for selecting the first image, and determining an angle of rotation from the dragging event to replace the first image with a second image of the plurality of images corresponding to the determined angle of rotation for appropriately rotating an object (page 12/par. 226), as disclosed in Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1), where each image of the presentation can have a plurality of hot spots such that each hot spot may be selected by a viewer to access a corresponding separate media file, so that the first image and the second image corresponding to the determined angle of rotation may present hot spots for selection by the viewer (page 4/par. 62, page 11/par. 192-193 and page 12/par. 226). It would have been further obvious to apply the determination of a drag distance as a change in position from an initial position for a drag event to appropriately calculate a rotation angle for rotating a viewing angle of an object for display (page 3/par. 40) as disclosed in Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1). Claim 1 of U.S. Patent 9,865,069 otherwise discloses method steps corresponding to the functions of the system of claim 35 as shown in the claim chart above. Therefore, claim 35 is not patentably distinct from claim 1 of U.S. Patent 9,865,069.
Claims 16-22, 24-27 and 29-31 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,332,295 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1). Claim 23 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,332,295 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Lanahan et al. (U.S. Patent Application Publication 2010/0005417 A1). Claim 28 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,332,295 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). Claims 32-34 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,332,295. Claim 35 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,332,295 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1).
The following is a claim comparison of claims 16-35 of the instant application and claim 1 of U.S. Patent 10,332,295.
Application No. 18/732,900 | U.S. Patent 10,332,295
16. A system for generating an interactive rotatable 360-degree presentation of an object, the system comprising: an interface configured to:
obtain data describing the object, and
obtain a plurality of images of the object; and
a processor configured to: automatically rearrange the plurality of images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees, and
automatically determine whether to add at least one hotspot to at least one image in the at least one sequence.
1. A computer-implemented method for generating an interactive rotatable 360-degree presentation of an object, the method comprising:
obtaining data describing the object from a first source, the data being contained in a record and being arranged in rows and columns, each of the rows corresponding to respective different objects, the data describing the object including (i) information about a plurality of images of the object to be obtained and converted into the interactive rotatable 360-degree presentation of the object, and (ii) additional information about the object, the additional information about the object including at least one of a vehicle make, a vehicle model, a vehicle identification number (VIN), or a vehicle body style, wherein obtaining the data describing the object includes (i) determining which columns of the plurality of data include the information about the plurality of images of the object to be obtained, (ii) determining which columns of the plurality of data include the additional information about the object, and (iii) determining which row of the plurality of data corresponds to the object by comparing data contained in the columns that include the additional information about the object with predetermined first criteria;
automatically obtaining the plurality of images of the object from a second source separate from the first source of the data describing the object, the second source of the plurality of images of the object being specified by the information about the plurality of images of the object;
automatically resizing the plurality of images such that the plurality of images are of a common size;
automatically rearranging the plurality of images into at least a first sequence and a second sequence, both the first sequence and the second sequence including ordered images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees, by (i) determining the order for the ordered images based on predetermined information corresponding to the object including a notation describing which images from the plurality of images should be included in the first sequence and the second sequence and in which order, (ii) selecting images from the plurality of images that correspond to an exterior view of the object for inclusion in the first sequence, and (iii) selecting images from the plurality of images that correspond to an interior view of the object for inclusion in the second sequence;
automatically determining whether to add at least one hotspot to at least one image of the plurality of images, and if the at least one hotspot is to be added, automatically adding the at least one hotspot to the at least one image, the at least one hotspot being associated with a separate media element and enabling access to the separate media element when selected by a viewer of the interactive rotatable 360-degree presentation of the object, by (i) determining a position for the at least one hotspot on the at least one image based on the predetermined information corresponding to the object, (ii) determining the separate media element based on the predetermined information corresponding to the object, the separate media element being any of text, an additional image, a video, a web page link, and an additional interactive rotatable 360-degree presentation, and (iii) adding the at least one hotspot if the additional information about the object meets certain predetermined second criteria;
automatically merging the ordered images of the first sequence including exterior images into an exterior 360-degree view of the object; automatically merging the ordered images of the second sequence including interior images into an interior 360-degree view of the object; and automatically merging the exterior 360-degree view of the object and the interior 360-degree view of the object into the interactive rotatable 360-degree presentation of the object.
17 | 1
18 | 1
19 | 1
20 | 1
21 | 1
22 | 1
23. The system of claim 16, wherein the at least one hotspot automatically follows an appropriate point on the interactive rotatable 360-degree presentation as the interactive rotatable 360-degree presentation rotates. | 1
24 | 1
25 | 1
26 | 1
27 | 1
28. The system of claim 16, wherein the plurality of images comprises an exterior image of the object, an interior image of the object, and a close-up image of the object. | 1
29 | 1
30 | 1
31 | 1
32. A computer implemented method, the method comprising:
obtaining data describing an object, the data describing the first object including information about a plurality of images of the object to be obtained and converted into an interactive rotatable 360-degree presentation of the object;
obtaining the plurality of images of the object;
automatically rearranging the plurality of images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees; and
automatically determining whether to add at least one hotspot to at least one image in the at least one sequence.
1. A computer-implemented method for generating an interactive rotatable 360-degree presentation of an object, the method comprising:
obtaining data describing the object from a first source, the data being contained in a record and being arranged in rows and columns, each of the rows corresponding to respective different objects, the data describing the object including (i) information about a plurality of images of the object to be obtained and converted into the interactive rotatable 360-degree presentation of the object, and (ii) additional information about the object, the additional information about the object including at least one of a vehicle make, a vehicle model, a vehicle identification number (VIN), or a vehicle body style, wherein obtaining the data describing the object includes (i) determining which columns of the plurality of data include the information about the plurality of images of the object to be obtained, (ii) determining which columns of the plurality of data include the additional information about the object, and (iii) determining which row of the plurality of data corresponds to the object by comparing data contained in the columns that include the additional information about the object with predetermined first criteria;
automatically obtaining the plurality of images of the object from a second source separate from the first source of the data describing the object, the second source of the plurality of images of the object being specified by the information about the plurality of images of the object;
automatically resizing the plurality of images such that the plurality of images are of a common size;
automatically rearranging the plurality of images into at least a first sequence and a second sequence, both the first sequence and the second sequence including ordered images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees, by (i) determining the order for the ordered images based on predetermined information corresponding to the object including a notation describing which images from the plurality of images should be included in the first sequence and the second sequence and in which order, (ii) selecting images from the plurality of images that correspond to an exterior view of the object for inclusion in the first sequence, and (iii) selecting images from the plurality of images that correspond to an interior view of the object for inclusion in the second sequence;
automatically determining whether to add at least one hotspot to at least one image of the plurality of images, and if the at least one hotspot is to be added, automatically adding the at least one hotspot to the at least one image, the at least one hotspot being associated with a separate media element and enabling access to the separate media element when selected by a viewer of the interactive rotatable 360-degree presentation of the object, by (i) determining a position for the at least one hotspot on the at least one image based on the predetermined information corresponding to the object, (ii) determining the separate media element based on the predetermined information corresponding to the object, the separate media element being any of text, an additional image, a video, a web page link, and an additional interactive rotatable 360-degree presentation, and (iii) adding the at least one hotspot if the additional information about the object meets certain predetermined second criteria;
automatically merging the ordered images of the first sequence including exterior images into an exterior 360-degree view of the object; automatically merging the ordered images of the second sequence including interior images into an interior 360-degree view of the object; and automatically merging the exterior 360-degree view of the object and the interior 360-degree view of the object into the interactive rotatable 360-degree presentation of the object.
33 | 1
34 | 1
35. A system for generating an interactive rotatable 360-degree presentation of a first object, the system comprising: an interface configured to:
obtain data describing the first object, the data including information about a plurality of images of the first object to be obtained and converted into the interactive rotatable 360-degree presentation of the first object, and additional information about the first object; and
obtain the plurality of images of the first object; and
a processor configured to: automatically merge the plurality of images into at least one interactive rotatable 360-degree view of the first object, the at least one interactive rotatable 360-degree view comprising at least one of an exterior view of the first object and an interior view of the first object;
display a first image of the plurality of images for the interactive rotatable 360-degree presentation; receive a drag event; determine, responsive to the receiving, a drag distance of the drag event; determine, based on the drag distance, a display angle of the first object; and display, based on the display angle, a second image of the plurality of images for the interactive rotatable 360-degree presentation.
1. A computer-implemented method for generating an interactive rotatable 360-degree presentation of an object, the method comprising:
obtaining data describing the object from a first source, the data being contained in a record and being arranged in rows and columns, each of the rows corresponding to respective different objects, the data describing the object including (i) information about a plurality of images of the object to be obtained and converted into the interactive rotatable 360-degree presentation of the object, and (ii) additional information about the object, the additional information about the object including at least one of a vehicle make, a vehicle model, a vehicle identification number (VIN), or a vehicle body style, wherein obtaining the data describing the object includes (i) determining which columns of the plurality of data include the information about the plurality of images of the object to be obtained, (ii) determining which columns of the plurality of data include the additional information about the object, and (iii) determining which row of the plurality of data corresponds to the object by comparing data contained in the columns that include the additional information about the object with predetermined first criteria;
automatically obtaining the plurality of images of the object from a second source separate from the first source of the data describing the object, the second source of the plurality of images of the object being specified by the information about the plurality of images of the object;
automatically resizing the plurality of images such that the plurality of images are of a common size;
automatically rearranging the plurality of images into at least a first sequence and a second sequence, both the first sequence and the second sequence including ordered images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees, by (i) determining the order for the ordered images based on predetermined information corresponding to the object including a notation describing which images from the plurality of images should be included in the first sequence and the second sequence and in which order, (ii) selecting images from the plurality of images that correspond to an exterior view of the object for inclusion in the first sequence, and (iii) selecting images from the plurality of images that correspond to an interior view of the object for inclusion in the second sequence;
automatically determining whether to add at least one hotspot to at least one image of the plurality of images, and if the at least one hotspot is to be added, automatically adding the at least one hotspot to the at least one image, the at least one hotspot being associated with a separate media element and enabling access to the separate media element when selected by a viewer of the interactive rotatable 360-degree presentation of the object, by (i) determining a position for the at least one hotspot on the at least one image based on the predetermined information corresponding to the object, (ii) determining the separate media element based on the predetermined information corresponding to the object, the separate media element being any of text, an additional image, a video, a web page link, and an additional interactive rotatable 360-degree presentation, and (iii) adding the at least one hotspot if the additional information about the object meets certain predetermined second criteria;
automatically merging the ordered images of the first sequence including exterior images into an exterior 360-degree view of the object; automatically merging the ordered images of the second sequence including interior images into an interior 360-degree view of the object; and automatically merging the exterior 360-degree view of the object and the interior 360-degree view of the object into the interactive rotatable 360-degree presentation of the object.
Claims 16-22, 24-27 and 29-31 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,332,295 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1). Claim 23 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,332,295 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Lanahan et al. (U.S. Patent Application Publication 2010/0005417 A1). Claim 28 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,332,295 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). Claims 32-34 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,332,295. Claim 35 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,332,295 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1).
For independent claim 16, claim 1 of U.S. Patent 10,332,295 does not disclose a system comprising an interface and a processor. However, these limitations are well-known in the art as disclosed in Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1). It would have been obvious to use removable media in combination with a processor to implement the functions of a computer system presenting a 360-degree view of an object (page 2/par. 18 and 22 and page 4/par. 60), as disclosed in Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1). Claim 1 of U.S. Patent 10,332,295 otherwise discloses method steps corresponding to the functions of the system of claim 16 as shown in the claim chart above. Therefore, claim 16 is not patentably distinct from claim 1 of U.S. Patent 10,332,295.
For dependent claims 17-22, 24-27 and 29-31, claim 1 of U.S. Patent 10,332,295 mirrors and recites the limitations of claims 17-22, 24-27 and 29-31 as shown in the claim chart above. Therefore, claims 17-22, 24-27 and 29-31 are not patentably distinct from claim 1 of U.S. Patent 10,332,295.
For dependent claim 23, claim 1 of U.S. Patent 10,332,295 does not disclose that a hotspot is associated with a point. However, this limitation is well-known in the art as disclosed in Lanahan et al. (U.S. Patent Application Publication 2010/0005417 A1). It would have been obvious to designate a point in media for the addition of a hotspot and an associated media element for access by a user (page 3/par. 37), as taught in Lanahan et al. (U.S. Patent Application Publication 2010/0005417 A1), so that the hotspot automatically follows the point as the interactive rotatable 360-degree presentation rotates. Therefore, claim 23 is not patentably distinct from claim 1 of U.S. Patent 10,332,295.
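For illustration only, the point-following behavior discussed for claim 23 can be sketched as a per-frame lookup: each frame of the rotation stores the screen coordinates of the same physical point, so the hotspot tracks that point as the presentation rotates. The function and data names below are hypothetical and are not drawn from Lanahan or the claims.

```python
# Hypothetical sketch: a hotspot that follows a fixed point on the object.
# Each frame of the 360-degree rotation stores its own (x, y) coordinates
# for that point, so the hotspot moves with the object as it rotates.

def hotspot_position(frame_positions, frame_index):
    """Return the (x, y) screen position of the hotspot for a given frame.

    The modulo wrap reflects that the rotation repeats every full turn.
    """
    return frame_positions[frame_index % len(frame_positions)]

# One (x, y) entry per frame, e.g. tracking a door handle across 4 frames.
positions = [(120, 80), (110, 82), (95, 85), (70, 86)]
assert hotspot_position(positions, 2) == (95, 85)
# A full turn later (frame 6 with 4 frames), the same position is reused.
assert hotspot_position(positions, 6) == (95, 85)
```
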
For dependent claim 28, claim 1 of U.S. Patent 10,332,295 does not disclose a close-up image. However, this limitation is well-known in the art as disclosed in Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). It would have been obvious to use an arrangement of images including a close-up image to present an image in greater detail (page 3/par. 45), as taught in Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). Therefore, claim 28 is not patentably distinct from claim 1 of U.S. Patent 10,332,295.
For independent claim 32, claim 1 of U.S. Patent 10,332,295 anticipates and discloses the same limitations as claim 32. Therefore, claim 32 is not patentably distinct from claim 1 of U.S. Patent 10,332,295.
For dependent claims 33-34, claim 1 of U.S. Patent 10,332,295 mirrors and recites the limitations of claims 33-34 as shown in the claim chart above. Therefore, claims 33-34 are not patentably distinct from claim 1 of U.S. Patent 10,332,295.
For independent claim 35, claim 1 of U.S. Patent 10,332,295 does not disclose a system comprising an interface and a processor for displaying a first image; receiving a drag event; determining, responsive to the receiving, a drag distance of the drag event; determining, based on the drag distance, a display angle of a first object; and displaying, based on the display angle, a second image. However, these limitations are well-known in the art as disclosed in Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1). It would have been obvious to use removable media in combination with a processor to implement the functions of a computer system presenting a 360-degree view of an object (page 2/par. 18 and 22 and page 4/par. 60); to display a first image of a plurality of images, detect a user drag event selecting the first image, and determine an angle of rotation from the drag event so as to replace the first image with a second image of the plurality of images corresponding to the determined angle of rotation, thereby rotating the object (page 12/par. 226); and to provide each image of the presentation with a plurality of hot spots, each selectable by a viewer to access a corresponding separate media file, so that the first image and the second image corresponding to the determined angle of rotation may present hot spots for selection by the viewer (page 4/par. 62, page 11/par. 192-193 and page 12/par. 226), as disclosed in Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1). It would have been further obvious to determine a drag distance as a change in position from an initial position of a drag event in order to calculate a rotation angle for rotating a viewing angle of an object for display (page 3/par. 40), as disclosed in Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1). Claim 1 of U.S. Patent 10,332,295 otherwise discloses method steps corresponding to the system of claim 35 as shown in the claim chart above. Therefore, claim 35 is not patentably distinct from claim 1 of U.S. Patent 10,332,295.
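For illustration only, the drag-to-rotation mapping discussed for claim 35 can be sketched as follows, assuming images evenly spaced around 360 degrees and a fixed drag sensitivity; the function name, sensitivity value, and image count are hypothetical and not drawn from Ramamoorthy or Zhang.

```python
# Hypothetical sketch: convert a drag distance into a display angle and
# select the image of the 360-degree presentation nearest that angle.

def image_for_drag(num_images, start_index, drag_distance_px, px_per_degree=4.0):
    """Map a horizontal drag distance (pixels) to an image index.

    Images are assumed evenly distributed around 360 degrees, so each
    image covers 360 / num_images degrees of rotation.
    """
    degrees_per_image = 360.0 / num_images
    # Display angle = starting angle plus the rotation implied by the drag.
    angle = (start_index * degrees_per_image
             + drag_distance_px / px_per_degree) % 360.0
    # Pick the image whose viewing angle is closest to the display angle.
    return int(round(angle / degrees_per_image)) % num_images

# With 36 images (10 degrees apart) and 4 px per degree, a 40 px drag
# rotates the view 10 degrees, i.e. advances exactly one image.
assert image_for_drag(36, 0, 40) == 1
# Dragging the other way wraps around to the last image.
assert image_for_drag(36, 0, -40) == 35
```
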
Claims 16-22, 24-27 and 29-31 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,672,106 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1). Claim 23 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,672,106 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Lanahan et al. (U.S. Patent Application Publication 2010/0005417 A1). Claim 28 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,672,106 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). Claims 32-34 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,672,106. Claim 35 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,672,106 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1).
The following is a claim comparison of claims 16-35 of the instant application and claim 1 of U.S. Patent 10,672,106.
Application No. 18/732,900 | U.S. Patent 10,672,106
16. A system for generating an interactive rotatable 360-degree presentation of an object, the system comprising: an interface configured to:
obtain data describing the object, and
obtain a plurality of images of the object; and
a processor configured to: automatically rearrange the plurality of images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees, and
automatically determine whether to add at least one hotspot to at least one image in the at least one sequence.
1. A computer-implemented method for generating an interactive rotatable 360-degree presentation of an object, the method comprising:
obtaining, from a first source, data describing the object contained in a record including a plurality of data arranged in rows and columns, each of the rows corresponding to respective different objects, the data describing the object including (i) information about a plurality of images of the object to be obtained and converted into the interactive rotatable 360-degree presentation of the object and (ii) additional information about the object, wherein obtaining the data describing the object includes (i) determining which columns of the plurality of data include the information about the plurality of images of the object to be obtained, (ii) determining which columns of the plurality of data include the additional information about the object, and (iii) determining which row of the plurality of data corresponds to the object by comparing data contained in the columns that include the additional information about the object with predetermined criteria;
automatically obtaining the plurality of images of the object from a second source separate from the first source, the second source of the plurality of images of the object being specified by the information about the plurality of images of the object;
automatically resizing the plurality of images such that the plurality of images are of a common size;
automatically rearranging the plurality of images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the object distributed around 360 degrees, by (i) determining the order for the ordered images based on predetermined information corresponding to the object including a notation describing which images from the plurality of images should be included in the at least one sequence and in which order, (ii) selecting images from the plurality of images that correspond to an exterior view of the object for inclusion in the at least one sequence, and (iii) selecting images from the plurality of images that correspond to an interior view of the object for inclusion in an additional sequence of the at least one sequence;
automatically determining whether to add at least one hotspot to at least one image in the at least one sequence, and if the at least one hotspot is to be added, automatically adding the at least one hotspot to the at least one image, the at least one hotspot being associated with a separate media element and enabling access to the separate media element when selected by a viewer of the interactive rotatable 360-degree presentation of the object, by (i) determining a position for the at least one hotspot on the at least one image based on the predetermined information corresponding to the object, (ii) determining the separate media element based on the predetermined information corresponding to the object, the separate media element being at least one of a text, an additional image, a video, a web page link, and an additional rotatable 360-degree presentation, and (iii) adding the at least one hotspot if the additional information about the object meets certain criteria from the predetermined criteria;
automatically merging the ordered images of the at least one sequence including exterior images into an exterior 360-degree view of the object; automatically merging the ordered images of the additional sequence including interior images into an interior 360-degree view of the object; and automatically merging the exterior 360-degree view of the object and the interior 360-degree view of the object into the interactive rotatable 360-degree presentation of the object.
17 | 1
18 | 1
19 | 1
20 | 1
21 | 1
22 | 1
23. The system of claim 16, wherein the at least one hotspot automatically follows an appropriate point on the interactive rotatable 360-degree presentation as the interactive rotatable 360-degree presentation rotates. | 1
24 | 1
25 | 1
26 | 1
27 | 1
28. The system of claim 16, wherein the plurality of images comprises an exterior image of the object, an interior image of the object, and a close-up image of the object. | 1
29 | 1
30 | 1
31 | 1
32. A computer implemented method, the method comprising:
obtaining data describing an object, the data describing the first object including information about a plurality of images of the object to be obtained and converted into an interactive rotatable 360-degree presentation of the object;
obtaining the plurality of images of the object;
automatically rearranging the plurality of images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees; and
automatically determining whether to add at least one hotspot to at least one image in the at least one sequence.
1. A computer-implemented method for generating an interactive rotatable 360-degree presentation of an object, the method comprising:
obtaining, from a first source, data describing the object contained in a record including a plurality of data arranged in rows and columns, each of the rows corresponding to respective different objects, the data describing the object including (i) information about a plurality of images of the object to be obtained and converted into the interactive rotatable 360-degree presentation of the object and (ii) additional information about the object, wherein obtaining the data describing the object includes (i) determining which columns of the plurality of data include the information about the plurality of images of the object to be obtained, (ii) determining which columns of the plurality of data include the additional information about the object, and (iii) determining which row of the plurality of data corresponds to the object by comparing data contained in the columns that include the additional information about the object with predetermined criteria;
automatically obtaining the plurality of images of the object from a second source separate from the first source, the second source of the plurality of images of the object being specified by the information about the plurality of images of the object;
automatically resizing the plurality of images such that the plurality of images are of a common size;
automatically rearranging the plurality of images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the object distributed around 360 degrees, by (i) determining the order for the ordered images based on predetermined information corresponding to the object including a notation describing which images from the plurality of images should be included in the at least one sequence and in which order, (ii) selecting images from the plurality of images that correspond to an exterior view of the object for inclusion in the at least one sequence, and (iii) selecting images from the plurality of images that correspond to an interior view of the object for inclusion in an additional sequence of the at least one sequence;
automatically determining whether to add at least one hotspot to at least one image in the at least one sequence, and if the at least one hotspot is to be added, automatically adding the at least one hotspot to the at least one image, the at least one hotspot being associated with a separate media element and enabling access to the separate media element when selected by a viewer of the interactive rotatable 360-degree presentation of the object, by (i) determining a position for the at least one hotspot on the at least one image based on the predetermined information corresponding to the object, (ii) determining the separate media element based on the predetermined information corresponding to the object, the separate media element being at least one of a text, an additional image, a video, a web page link, and an additional rotatable 360-degree presentation, and (iii) adding the at least one hotspot if the additional information about the object meets certain criteria from the predetermined criteria;
automatically merging the ordered images of the at least one sequence including exterior images into an exterior 360-degree view of the object; automatically merging the ordered images of the additional sequence including interior images into an interior 360-degree view of the object; and automatically merging the exterior 360-degree view of the object and the interior 360-degree view of the object into the interactive rotatable 360-degree presentation of the object.
33 | 1
34 | 1
35. A system for generating an interactive rotatable 360-degree presentation of a first object, the system comprising: an interface configured to:
obtain data describing the first object, the data including information about a plurality of images of the first object to be obtained and converted into the interactive rotatable 360-degree presentation of the first object, and additional information about the first object; and
obtain the plurality of images of the first object; and
a processor configured to: automatically merge the plurality of images into at least one interactive rotatable 360-degree view of the first object, the at least one interactive rotatable 360-degree view comprising at least one of an exterior view of the first object and an interior view of the first object;
display a first image of the plurality of images for the interactive rotatable 360-degree presentation; receive a drag event; determine, responsive to the receiving, a drag distance of the drag event; determine, based on the drag distance, a display angle of the first object; and display, based on the display angle, a second image of the plurality of images for the interactive rotatable 360-degree presentation.
1. A computer-implemented method for generating an interactive rotatable 360-degree presentation of an object, the method comprising:
obtaining, from a first source, data describing the object contained in a record including a plurality of data arranged in rows and columns, each of the rows corresponding to respective different objects, the data describing the object including (i) information about a plurality of images of the object to be obtained and converted into the interactive rotatable 360-degree presentation of the object and (ii) additional information about the object, wherein obtaining the data describing the object includes (i) determining which columns of the plurality of data include the information about the plurality of images of the object to be obtained, (ii) determining which columns of the plurality of data include the additional information about the object, and (iii) determining which row of the plurality of data corresponds to the object by comparing data contained in the columns that include the additional information about the object with predetermined criteria;
automatically obtaining the plurality of images of the object from a second source separate from the first source, the second source of the plurality of images of the object being specified by the information about the plurality of images of the object;
automatically resizing the plurality of images such that the plurality of images are of a common size;
automatically rearranging the plurality of images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the object distributed around 360 degrees, by (i) determining the order for the ordered images based on predetermined information corresponding to the object including a notation describing which images from the plurality of images should be included in the at least one sequence and in which order, (ii) selecting images from the plurality of images that correspond to an exterior view of the object for inclusion in the at least one sequence, and (iii) selecting images from the plurality of images that correspond to an interior view of the object for inclusion in an additional sequence of the at least one sequence;
automatically determining whether to add at least one hotspot to at least one image in the at least one sequence, and if the at least one hotspot is to be added, automatically adding the at least one hotspot to the at least one image, the at least one hotspot being associated with a separate media element and enabling access to the separate media element when selected by a viewer of the interactive rotatable 360-degree presentation of the object, by (i) determining a position for the at least one hotspot on the at least one image based on the predetermined information corresponding to the object, (ii) determining the separate media element based on the predetermined information corresponding to the object, the separate media element being at least one of a text, an additional image, a video, a web page link, and an additional rotatable 360-degree presentation, and (iii) adding the at least one hotspot if the additional information about the object meets certain criteria from the predetermined criteria;
automatically merging the ordered images of the at least one sequence including exterior images into an exterior 360-degree view of the object; automatically merging the ordered images of the additional sequence including interior images into an interior 360-degree view of the object; and automatically merging the exterior 360-degree view of the object and the interior 360-degree view of the object into the interactive rotatable 360-degree presentation of the object.
Claims 16-22, 24-27 and 29-31 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,672,106 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1). Claim 23 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,672,106 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Lanahan et al. (U.S. Patent Application Publication 2010/0005417 A1). Claim 28 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,672,106 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). Claims 32-34 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,672,106. Claim 35 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,672,106 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1).
For independent claim 16, claim 1 of U.S. Patent 10,672,106 does not disclose a system comprising an interface and a processor. However, these limitations are well-known in the art as disclosed in Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1). It would have been obvious to apply the use of removable media in combination with a processor to appropriately implement the functions of a computer system in presenting a 360-degree view of an object (page 2/par. 18 and 22 and page 4/par. 60) as disclosed in Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1). Claim 1 of U.S. Patent 10,672,106 otherwise discloses method steps corresponding to the functions of the system of claim 16 as shown in the claim chart above. Therefore, claim 16 is not patentably distinct from claim 1 of U.S. Patent 10,672,106.
For dependent claims 17-22, 24-27 and 29-31, claim 1 of U.S. Patent 10,672,106 mirrors and recites the limitations of claims 17-22, 24-27 and 29-31 as shown in the claim chart above. Therefore, claims 17-22, 24-27 and 29-31 are not patentably distinct from claim 1 of U.S. Patent 10,672,106.
For dependent claim 23, claim 1 of U.S. Patent 10,672,106 does not disclose a hotspot associated with a point. However, these limitations are well-known in the art as disclosed in Lanahan et al. (U.S. Patent Application Publication 2010/0005417 A1). It would have been obvious to apply the use of the designation of a point in media for the addition of a hotspot and an associated media element for access by a user (page 3/par. 37) as taught in Lanahan et al. (U.S. Patent Application Publication 2010/0005417 A1) so that the hotspot automatically follows the point as the interactive rotatable 360-degree presentation rotates. Therefore, claim 23 is not patentably distinct from claim 1 of U.S. Patent 10,672,106.
For dependent claim 28, claim 1 of U.S. Patent 10,672,106 does not disclose a close-up image. However, these limitations are well-known in the art as disclosed in Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). It would have been obvious to apply the use of an arrangement of images including a close-up image to present an image in greater focus (page 3/par. 45) as taught in Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). Therefore, claim 28 is not patentably distinct from claim 1 of U.S. Patent 10,672,106.
For independent claim 32, claim 1 of U.S. Patent 10,672,106 anticipates and discloses the same limitations as claim 32. Therefore, claim 32 is not patentably distinct from claim 1 of U.S. Patent 10,672,106.
For dependent claims 33-34, claim 1 of U.S. Patent 10,672,106 mirrors and recites the limitations of claims 33-34 as shown in the claim chart above. Therefore, claims 33-34 are not patentably distinct from claim 1 of U.S. Patent 10,672,106.
For independent claim 35, claim 1 of U.S. Patent 10,672,106 does not disclose a system comprising an interface and a processor for displaying a first image; receiving a drag event; determining, responsive to the receiving, a drag distance of the drag event; determining, based on the drag distance, a display angle of a first object; and displaying, based on the display angle, a second image. However, these limitations are well-known in the art as disclosed in Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1). It would have been obvious to apply the use of removable media in combination with a processor to appropriately implement the functions of a computer system in presenting a 360-degree view of an object (page 2/par. 18 and 22 and page 4/par. 60), including displaying a first image of a plurality of images, determining a user dragging event selecting the first image, and determining an angle of rotation from the dragging event to replace the first image with a second image of the plurality of images corresponding to the determined angle of rotation, thereby appropriately rotating the object (page 12/par. 226), as disclosed in Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1). In Ramamoorthy, each image of the presentation can have a plurality of hot spots, each selectable by a viewer to access a corresponding separate media file, so that the first image and the second image corresponding to the determined angle of rotation may present hot spots for selection by the viewer to access a corresponding separate media file (page 4/par. 62, page 11/par. 192-193 and page 12/par. 226). It would have been further obvious to apply the determination of a drag distance as a change in position from an initial position for a drag event to appropriately calculate a rotation angle for rotating a viewing angle of an object for display (page 3/par. 40) as disclosed in Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1). Claim 1 of U.S. Patent 10,672,106 otherwise discloses method steps corresponding to the functions of the system of claim 35 as shown in the claim chart above. Therefore, claim 35 is not patentably distinct from claim 1 of U.S. Patent 10,672,106.
Claims 16-20, 22, 24-27 and 32-34 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3, 5, 8-11 and 14 of U.S. Patent 10,672,169. Claim 23 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,672,169 in view of Lanahan et al. (U.S. Patent Application Publication 2010/0005417 A1). Claim 28 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,672,169 in view of Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). Claim 35 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,672,169 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1).
The following is a claim comparison of claims 16-20, 22-28 and 32-35 of the instant application and claims 1, 3, 5, 8-11 and 14 of U.S. Patent 10,672,169.
Application No. 18/732,900
U.S. Patent 10,672,169
16. A system for generating an interactive rotatable 360-degree presentation of an object, the system comprising: an interface configured to:
obtain data describing the object, and
obtain a plurality of images of the object; and
a processor configured to: automatically rearrange the plurality of images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees, and
automatically determine whether to add at least one hotspot to at least one image in the at least one sequence.
1. A system for generating an interactive rotatable 360-degree presentation of a first object, the system comprising: an interface configured to:
obtain data describing the first object, the data describing the first object contained in a record and arranged in at least one row and at least one column, the record including additional data describing at least one additional object, the data describing the first object including information about a plurality of arbitrarily arranged images of the first object to be obtained and converted into the interactive rotatable 360-degree presentation of the first object, and
additional information about the first object, the additional information about the first object including at least one of a vehicle make, a vehicle model, a vehicle identification number, or a vehicle body style; and
obtain the plurality of arbitrarily arranged images of the first object; and
a processor configured to: automatically rearrange the plurality of arbitrarily arranged images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the first object distributed around 360 degrees;
automatically determine whether to add at least one hotspot to at least one image in the at least one sequence, and if the at least one hotspot is to be added, automatically add the at least one hotspot to the at least one image, the at least one hotspot being associated with a separate media element and enabling access to the separate media element when selected by a viewer of the interactive rotatable 360-degree presentation of the first object; and
automatically merge the ordered images of the at least one sequence into at least one interactive rotatable 360-degree view of the first object, the at least one interactive rotatable 360° view including at least one of an exterior view image of the first object and an interior view image of the first object.
17
1
18
1
19
1
20
3
22
5
23. The system of claim 16, wherein the at least one hotspot automatically follows an appropriate point on the interactive rotatable 360-degree presentation as the interactive rotatable 360-degree presentation rotates.
1
24
8
25
9
26
10
27
11
28. The system of claim 16, wherein the plurality of images comprises an exterior image of the object, an interior image of the object, and a close-up image of the object.
1
32. A computer implemented method, the method comprising:
obtaining data describing an object, the data describing the first object including information about a plurality of images of the object to be obtained and converted into an interactive rotatable 360-degree presentation of the object;
obtaining the plurality of images of the object;
automatically rearranging the plurality of images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees; and
automatically determining whether to add at least one hotspot to at least one image in the at least one sequence.
14. A computer implemented method, the method comprising:
obtaining data describing a first object, the data describing the first object contained in a record and arranged in at least one row and at least one column, the record including additional data describing at least one additional object, the data describing the first object including information about a plurality of arbitrarily arranged images of the first object to be obtained and converted into an interactive rotatable 360-degree presentation of the first object, and additional information about the first object, the additional information about the first object including at least one of a vehicle make, a vehicle model, a vehicle identification number, or a vehicle body style;
obtaining a plurality of arbitrarily arranged images of the first object;
automatically rearranging the plurality of arbitrarily arranged images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the first object distributed around 360 degrees;
adding at least one hotspot to at least one image in the at least one sequence, the at least one hotspot being associated with a separate media element and enabling access to the separate media element when selected by a viewer of the interactive rotatable 360-degree presentation of the first object; and
automatically merging the ordered images of the at least one sequence into at least one interactive rotatable 360-degree view of the first object, the at least one interactive rotatable 360-degree view including at least one of an exterior view of the first object and an interior view of the first object.
33
14
34
14
35. A system for generating an interactive rotatable 360-degree presentation of a first object, the system comprising: an interface configured to:
obtain data describing the first object, the data including information about a plurality of images of the first object to be obtained and converted into the interactive rotatable 360-degree presentation of the first object, and additional information about the first object; and
obtain the plurality of images of the first object; and
processor configured to:
automatically merge the plurality of images into at least one interactive rotatable 360-degree view of the first object, the at least one interactive rotatable 360-degree view comprising at least one of an exterior view of the first object and an interior view of the first object;
display a first image of the plurality of images for the interactive rotatable 360-degree presentation; receive a drag event; determine, responsive to the receiving, a drag distance of the drag event; determine, based on the drag distance, a display angle of the first object; and display, based on the display angle, a second image of the plurality of images for the interactive rotatable 360-degree presentation.
1. A system for generating an interactive rotatable 360-degree presentation of a first object, the system comprising: an interface configured to:
obtain data describing the first object, the data describing the first object contained in a record and arranged in at least one row and at least one column, the record including additional data describing at least one additional object, the data describing the first object including information about a plurality of arbitrarily arranged images of the first object to be obtained and converted into the interactive rotatable 360-degree presentation of the first object, and
additional information about the first object, the additional information about the first object including at least one of a vehicle make, a vehicle model, a vehicle identification number, or a vehicle body style; and
obtain the plurality of arbitrarily arranged images of the first object; and
a processor configured to: automatically rearrange the plurality of arbitrarily arranged images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the first object distributed around 360 degrees;
automatically determine whether to add at least one hotspot to at least one image in the at least one sequence, and if the at least one hotspot is to be added, automatically add the at least one hotspot to the at least one image, the at least one hotspot being associated with a separate media element and enabling access to the separate media element when selected by a viewer of the interactive rotatable 360-degree presentation of the first object; and
automatically merge the ordered images of the at least one sequence into at least one interactive rotatable 360-degree view of the first object, the at least one interactive rotatable 360° view including at least one of an exterior view image of the first object and an interior view image of the first object.
For independent claim 16, claim 1 of U.S. Patent 10,672,169 anticipates and discloses the same limitations as claim 16. Therefore, claim 16 is not patentably distinct from claim 1 of U.S. Patent 10,672,169.
For dependent claims 17-20, 22 and 24-27, claims 1, 3, 5 and 8-11 of U.S. Patent 10,672,169 mirror and recite the limitations of claims 17-20, 22 and 24-27 as shown in the claim chart above. Therefore, claims 17-20, 22 and 24-27 are not patentably distinct from claims 1, 3, 5 and 8-11 of U.S. Patent 10,672,169.
For dependent claim 23, claim 1 of U.S. Patent 10,672,169 does not disclose a hotspot associated with a point. However, these limitations are well-known in the art as disclosed in Lanahan et al. (U.S. Patent Application Publication 2010/0005417 A1). It would have been obvious to apply the use of the designation of a point in media for the addition of a hotspot and an associated media element for access by a user (page 3/par. 37) as taught in Lanahan et al. (U.S. Patent Application Publication 2010/0005417 A1) so that the hotspot automatically follows the point as the interactive rotatable 360-degree presentation rotates. Therefore, claim 23 is not patentably distinct from claim 1 of U.S. Patent 10,672,169.
For dependent claim 28, claim 1 of U.S. Patent 10,672,169 does not disclose a close-up image. However, these limitations are well-known in the art as disclosed in Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). It would have been obvious to apply the use of an arrangement of images including a close-up image to present an image in greater focus (page 3/par. 45) as taught in Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). Therefore, claim 28 is not patentably distinct from claim 1 of U.S. Patent 10,672,169.
For independent claim 32, claim 14 of U.S. Patent 10,672,169 anticipates and discloses the same limitations as claim 32. Therefore, claim 32 is not patentably distinct from claim 14 of U.S. Patent 10,672,169.
For dependent claims 33-34, claim 14 of U.S. Patent 10,672,169 mirrors and recites the limitations of claims 33-34 as shown in the claim chart above. Therefore, claims 33-34 are not patentably distinct from claim 14 of U.S. Patent 10,672,169.
For independent claim 35, claim 1 of U.S. Patent 10,672,169 does not disclose a system comprising an interface and a processor for displaying a first image; receiving a drag event; determining, responsive to the receiving, a drag distance of the drag event; determining, based on the drag distance, a display angle of a first object; and displaying, based on the display angle, a second image. However, these limitations are well-known in the art as disclosed in Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1). It would have been obvious to apply the use of removable media in combination with a processor to appropriately implement the functions of a computer system in presenting a 360-degree view of an object (page 2/par. 18 and 22 and page 4/par. 60), including displaying a first image of a plurality of images, determining a user dragging event selecting the first image, and determining an angle of rotation from the dragging event to replace the first image with a second image of the plurality of images corresponding to the determined angle of rotation, thereby appropriately rotating the object (page 12/par. 226), as disclosed in Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1). In Ramamoorthy, each image of the presentation can have a plurality of hot spots, each selectable by a viewer to access a corresponding separate media file, so that the first image and the second image corresponding to the determined angle of rotation may present hot spots for selection by the viewer to access a corresponding separate media file (page 4/par. 62, page 11/par. 192-193 and page 12/par. 226). It would have been further obvious to apply the determination of a drag distance as a change in position from an initial position for a drag event to appropriately calculate a rotation angle for rotating a viewing angle of an object for display (page 3/par. 40) as disclosed in Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1). Claim 1 of U.S. Patent 10,672,169 otherwise discloses method steps corresponding to the functions of the system of claim 35 as shown in the claim chart above. Therefore, claim 35 is not patentably distinct from claim 1 of U.S. Patent 10,672,169.
Claims 16-20, 22, 24-27, 29 and 32-34 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, 5, 7, 9-12 and 17 of U.S. Patent 10,810,778. Claim 23 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,810,778 in view of Lanahan et al. (U.S. Patent Application Publication 2010/0005417 A1). Claim 28 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,810,778 in view of Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). Claim 35 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,810,778 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1).
The following is a claim comparison of claims 16-20, 22-29 and 32-35 of the instant application and claims 1, 2, 5, 7, 9-12 and 17 of U.S. Patent 10,810,778.
Application No. 18/732,900
U.S. Patent 10,810,778
16. A system for generating an interactive rotatable 360-degree presentation of an object, the system comprising: an interface configured to:
obtain data describing the object, and
obtain a plurality of images of the object; and
a processor configured to: automatically rearrange the plurality of images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees, and
automatically determine whether to add at least one hotspot to at least one image in the at least one sequence.
1. A system for generating an interactive rotatable 360-degree presentation of a first object, the system comprising: an interface configured to:
obtain data describing the first object, the data describing the first object contained in a record and arranged in at least one row and at least one column, the record including additional data describing at least one additional object, the data describing the first object including information about a plurality of arranged images of the first object to be obtained and converted into the interactive rotatable 360-degree presentation of the first object, and additional information about the first object, the additional information about the first object including at least one of a vehicle make, a vehicle model, a vehicle identification number, or a vehicle body style; and
obtain, responsive to obtaining the data describing the first object, the plurality of arranged images of the first object; and
a processor configured to: automatically rearrange the plurality of arranged images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the first object distributed around 360 degrees;
automatically determine whether to add at least one hotspot to at least one image in the at least one sequence, and if the at least one hotspot is to be added, automatically add the at least one hotspot to the at least one image, the at least one hotspot enabling access to a separate media element when selected by a viewer of the interactive rotatable 360-degree presentation of the first object; and
automatically merge the ordered images of the at least one sequence into at least one interactive rotatable 360-degree view of the first object, the at least one interactive rotatable 360-degree view comprising at least one of an exterior view of the first object and an interior view of the first object.
17
1
18
1
19
1
20
5
22
7
23. The system of claim 16, wherein the at least one hotspot automatically follows an appropriate point on the interactive rotatable 360-degree presentation as the interactive rotatable 360-degree presentation rotates.
1
24
9
25
10
26
11
27
12
28. The system of claim 16, wherein the plurality of images comprises an exterior image of the object, an interior image of the object, and a close-up image of the object.
1
29
2
32. A computer implemented method, the method comprising:
obtaining data describing an object, the data describing the first object including information about a plurality of images of the object to be obtained and converted into an interactive rotatable 360-degree presentation of the object;
obtaining the plurality of images of the object;
automatically rearranging the plurality of images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees; and
automatically determining whether to add at least one hotspot to at least one image in the at least one sequence.
17. A computer implemented method, the method comprising:
obtaining data describing a first object, the data describing the first object contained in a record and arranged in at least one row and at least one column, the record including additional data describing at least one additional object, the data describing the first object including information about a plurality of arranged images of the first object to be obtained and converted into an interactive rotatable 360-degree presentation of the first object, and additional information about the first object, the additional information about the first object including at least one of a vehicle make, a vehicle model, a vehicle identification number, or a vehicle body style;
obtaining, responsive to obtaining the data describing the first object, the plurality of arranged images of the first object;
automatically rearranging the plurality of arranged images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the first object distributed around 360 degrees;
adding at least one hotspot to at least one image in the at least one sequence, the at least one hotspot enabling access to a separate media element when selected by a viewer of the interactive rotatable 360-degree presentation of the first object; and automatically merging the ordered images of the at least one sequence into at least one interactive rotatable 360-degree view of the first object, the at least one interactive rotatable 360-degree view including at least one of an exterior view of the first object and an interior view of the first object.
33
17
34
17
35. A system for generating an interactive rotatable 360-degree presentation of a first object, the system comprising: an interface configured to:
obtain data describing the first object, the data including information about a plurality of images of the first object to be obtained and converted into the interactive rotatable 360-degree presentation of the first object, and additional information about the first object; and
obtain the plurality of images of the first object; and
processor configured to:
automatically merge the plurality of images into at least one interactive rotatable 360-degree view of the first object, the at least one interactive rotatable 360-degree view comprising at least one of an exterior view of the first object and an interior view of the first object;
display a first image of the plurality of images for the interactive rotatable 360-degree presentation; receive a drag event; determine, responsive to the receiving, a drag distance of the drag event; determine, based on the drag distance, a display angle of the first object; and display, based on the display angle, a second image of the plurality of images for the interactive rotatable 360-degree presentation.
1. A system for generating an interactive rotatable 360-degree presentation of a first object, the system comprising: an interface configured to:
obtain data describing the first object, the data describing the first object contained in a record and arranged in at least one row and at least one column, the record including additional data describing at least one additional object, the data describing the first object including information about a plurality of arranged images of the first object to be obtained and converted into the interactive rotatable 360-degree presentation of the first object, and additional information about the first object, the additional information about the first object including at least one of a vehicle make, a vehicle model, a vehicle identification number, or a vehicle body style; and
obtain, responsive to obtaining the data describing the first object, the plurality of arranged images of the first object; and
a processor configured to: automatically rearrange the plurality of arranged images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the first object distributed around 360 degrees;
automatically determine whether to add at least one hotspot to at least one image in the at least one sequence, and if the at least one hotspot is to be added, automatically add the at least one hotspot to the at least one image, the at least one hotspot enabling access to a separate media element when selected by a viewer of the interactive rotatable 360-degree presentation of the first object; and
automatically merge the ordered images of the at least one sequence into at least one interactive rotatable 360-degree view of the first object, the at least one interactive rotatable 360-degree view comprising at least one of an exterior view of the first object and an interior view of the first object.
Claims 16-20, 22, 24-27, 29 and 32-34 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, 5, 7, 9-12 and 17 of U.S. Patent 10,810,778. Claim 23 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,810,778 in view of Lanahan et al. (U.S. Patent Application 2010/0005417 A1). Claim 28 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,810,778 in view of Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). Claim 35 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,810,778 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1).
For independent claim 16, claim 1 of U.S. Patent 10,810,778 anticipates and discloses the same limitations as claim 16. Therefore, claim 16 is not patentably distinct from claim 1 of U.S. Patent 10,810,778.
For dependent claims 17-20, 22, 24-27 and 29, claims 1, 2, 5, 7 and 9-12 of U.S. Patent 10,810,778 mirror and recite the limitations of claims 17-20, 22, 24-27 and 29 as shown in the claim chart above. Therefore, claims 17-20, 22, 24-27 and 29 are not patentably distinct from claims 1, 2, 5, 7 and 9-12 of U.S. Patent 10,810,778.
For dependent claim 23, claim 1 of U.S. Patent 10,810,778 does not disclose a hotspot associated with a point. However, these limitations are well-known in the art as disclosed in Lanahan et al. (U.S. Patent Application 2010/0005417 A1). It would have been obvious to apply the use of the designation of a point in media for the addition of a hotspot and an associated media element for access by a user (page 3/par. 37) as taught in Lanahan et al. (U.S. Patent Application 2010/0005417 A1) so that the hotspot automatically follows the point as the interactive rotatable 360-degree presentation rotates. Therefore, claim 23 is not patentably distinct from claim 1 of U.S. Patent 10,810,778.
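The rationale above (a hotspot that automatically follows a point as the presentation rotates) can be illustrated with a minimal Python sketch. The fixed anchor angle, the linear angle-to-pixel mapping, and the `frame_width` default are hypothetical choices for illustration; the claims specify only that the hotspot follows the point.

```python
def hotspot_screen_position(anchor_angle_deg, display_angle_deg, frame_width=800):
    """Sketch: a hotspot anchored at a fixed angle on the object has its
    horizontal screen position recomputed for the currently displayed
    angle, so it appears to follow the point as the presentation rotates.
    The linear mapping and frame_width are assumptions, not claim text.
    """
    # Angle of the anchor relative to the currently displayed view.
    relative = (anchor_angle_deg - display_angle_deg) % 360.0
    if 90.0 < relative < 270.0:
        return None  # anchor faces away from the viewer; hide the hotspot
    if relative >= 270.0:
        relative -= 360.0  # normalize to [-90, 90]
    # Map [-90, 90] degrees of relative angle onto the frame width.
    return (relative + 90.0) / 180.0 * frame_width
```

For example, a hotspot anchored at 0 degrees sits at the frame center when the display angle is 0, shifts left as the presentation rotates toward it, and is hidden once the anchor rotates to the far side of the object.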
For dependent claim 28, claim 1 of U.S. Patent 10,810,778 does not disclose a close-up image. However, these limitations are well-known in the art as disclosed in Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). It would have been obvious to apply the use of an arrangement of images including a close-up image to present an image at a greater focus (page 3/par. 45) as taught in Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). Therefore, claim 28 is not patentably distinct from claim 1 of U.S. Patent 10,810,778.
For independent claim 32, claim 17 of U.S. Patent 10,810,778 anticipates and discloses the same limitations of claim 32. Therefore, claim 32 is not patentably distinct from claim 17 of U.S. Patent 10,810,778.
For dependent claims 33-34, claim 17 of U.S. Patent 10,810,778 mirrors and recites the limitations of claims 33-34 as shown in the claim chart above. Therefore, claims 33-34 are not patentably distinct from claim 17 of U.S. Patent 10,810,778.
For independent claim 35, claim 1 of U.S. Patent 10,810,778 does not disclose a system comprising an interface and a processor for displaying a first image; receiving a drag event; determining, responsive to the receiving, a drag distance of the drag event; determining, based on the drag distance, a display angle of a first object; and displaying, based on the display angle, a second image. However, these limitations are well-known in the art as disclosed in Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1). It would have been obvious to apply the use of removable media in combination with a processor to appropriately implement the functions of a computer system in presenting a 360-degree view of an object (page 2/par. 18 and 22 and page 4/par. 60), as disclosed in Ramamoorthy: displaying a first image of a plurality of images, determining a user dragging event for selecting the first image, and determining an angle of rotation from the dragging event to replace the first image with a second image of the plurality of images corresponding to the determined angle of rotation, for appropriately rotating an object (page 12/par. 226). In Ramamoorthy, each image of the presentation can have a plurality of hot spots, each selectable by a viewer to access a corresponding separate media file, so that the first image and the second image corresponding to the determined angle of rotation may present hot spots for selection by the viewer (page 4/par. 62, page 11/par. 192-193 and page 12/par. 226). It would have been further obvious to apply the determination of a drag distance as a change in position from an initial position for a drag event to appropriately calculate a rotation angle for rotating a viewing angle of an object for display (page 3/par. 40), as disclosed in Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1). Claim 1 of U.S. Patent 10,810,778 otherwise discloses a method of steps corresponding to the method of claim 35 as shown in the claim chart above. Therefore, claim 35 is not patentably distinct from claim 1 of U.S. Patent 10,810,778.
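The drag-event limitations discussed above (a drag distance determines a display angle, and the display angle selects the second image) can be sketched in Python as follows. The `pixels_per_degree` sensitivity constant and the even angular spacing of the images are assumptions made for illustration; the claims fix neither.

```python
def image_for_drag(images, current_index, drag_distance_px, pixels_per_degree=4.0):
    """Sketch of the claimed display logic: map a drag distance to a
    display angle, then display the image nearest that angle.
    pixels_per_degree is a hypothetical sensitivity; the images are
    assumed evenly distributed around 360 degrees.
    """
    n = len(images)
    degrees_per_image = 360.0 / n
    angle_delta = drag_distance_px / pixels_per_degree   # drag distance -> rotation angle
    current_angle = current_index * degrees_per_image
    display_angle = (current_angle + angle_delta) % 360.0
    new_index = round(display_angle / degrees_per_image) % n
    return images[new_index]
```

With 36 images (10 degrees apart) and the default sensitivity, a 40-pixel drag rotates the view by 10 degrees and advances the presentation by one image; a negative drag distance rotates the opposite way.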
Claims 16-20, 22, 24, 25, 27, 29 and 32-34 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 4, 6, 7 and 9-11 of U.S. Patent 10,853,985. Claim 23 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,853,985 in view of Lanahan et al. (U.S. Patent Application 2010/0005417 A1). Claim 28 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,853,985 in view of Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). Claim 35 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 10,853,985 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1).
The following is a claim comparison of claims 16-20, 22-25, 27-29 and 32-35 of the instant application and claims 1, 4, 6, 7 and 9-11 of U.S. Patent 10,853,985.
Application No. 18/732,900
U.S. Patent 10,853,985
16. A system for generating an interactive rotatable 360-degree presentation of an object, the system comprising: an interface configured to:
obtain data describing the object, and
obtain a plurality of images of the object; and
a processor configured to: automatically rearrange the plurality of images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees, and
automatically determine whether to add at least one hotspot to at least one image in the at least one sequence.
1. A system for generating an interactive rotatable 360-degree presentation of a first object, the system comprising: an interface configured to:
obtain data describing the first object, the data describing the first object including information about a plurality of arranged images of the first object to be obtained and converted into the interactive rotatable 360-degree presentation of the first object, and additional information about the first object, the additional information about the first object including at least one of a vehicle make, a vehicle model, a vehicle identification number, or a vehicle body style; and
obtain, responsive to obtaining the data describing the first object, the plurality of arranged images of the first object; and
a processor configured to: automatically rearrange the plurality of arranged images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the first object distributed around 360 degrees, wherein automatically rearranging the plurality of arranged images into the at least one sequence includes: selecting at least one image from the plurality of arranged images that corresponds to an exterior view of the first object for inclusion in a first sequence of the at least one sequence, and selecting at least one image from the plurality of arranged images that corresponds to an interior view of the first object for inclusion in a second sequence of the at least one sequence; and
automatically merge the ordered images of the at least one sequence into at least one interactive rotatable 360-degree view of the first object, the at least one interactive rotatable 360-degree view comprising at least one of the exterior view of the first object and the interior view of the first object.
6. The system of claim 1, wherein the processor is further configured to add at least one hotspot to at least one image in the at least one sequence, the at least one hotspot enabling access to a separate media element when selected by a viewer of the interactive rotatable 360-degree presentation of the first object.
17
1
18
6
19
6
20
4
22
7
23. The system of claim 16, wherein the at least one hotspot automatically follows an appropriate point on the interactive rotatable 360-degree presentation as the interactive rotatable 360-degree presentation rotates.
1
24
9
25
10
27
11
28. The system of claim 16, wherein the plurality of images comprises an exterior image of the object, an interior image of the object, and a close-up image of the object.
1
29
1
32. A computer implemented method, the method comprising:
obtaining data describing an object, the data describing the first object including information about a plurality of images of the object to be obtained and converted into an interactive rotatable 360-degree presentation of the object;
obtaining the plurality of images of the object;
automatically rearranging the plurality of images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees; and
automatically determining whether to add at least one hotspot to at least one image in the at least one sequence.
1. A system for generating an interactive rotatable 360-degree presentation of a first object, the system comprising: an interface configured to:
obtain data describing the first object, the data describing the first object including information about a plurality of arranged images of the first object to be obtained and converted into the interactive rotatable 360-degree presentation of the first object, and additional information about the first object, the additional information about the first object including at least one of a vehicle make, a vehicle model, a vehicle identification number, or a vehicle body style; and
obtain, responsive to obtaining the data describing the first object, the plurality of arranged images of the first object; and
a processor configured to: automatically rearrange the plurality of arranged images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the first object distributed around 360 degrees, wherein automatically rearranging the plurality of arranged images into the at least one sequence includes: selecting at least one image from the plurality of arranged images that corresponds to an exterior view of the first object for inclusion in a first sequence of the at least one sequence, and selecting at least one image from the plurality of arranged images that corresponds to an interior view of the first object for inclusion in a second sequence of the at least one sequence; and
automatically merge the ordered images of the at least one sequence into at least one interactive rotatable 360-degree view of the first object, the at least one interactive rotatable 360-degree view comprising at least one of the exterior view of the first object and the interior view of the first object.
6. The system of claim 1, wherein the processor is further configured to add at least one hotspot to at least one image in the at least one sequence, the at least one hotspot enabling access to a separate media element when selected by a viewer of the interactive rotatable 360-degree presentation of the first object.
33
1
34
6
35. A system for generating an interactive rotatable 360-degree presentation of a first object, the system comprising: an interface configured to:
obtain data describing the first object, the data including information about a plurality of images of the first object to be obtained and converted into the interactive rotatable 360-degree presentation of the first object, and additional information about the first object; and
obtain the plurality of images of the first object; and
processor configured to:
automatically merge the plurality of images into at least one interactive rotatable 360-degree view of the first object, the at least one interactive rotatable 360-degree view comprising at least one of an exterior view of the first object and an interior view of the first object;
display a first image of the plurality of images for the interactive rotatable 360-degree presentation; receive a drag event; determine, responsive to the receiving, a drag distance of the drag event; determine, based on the drag distance, a display angle of the first object; and display, based on the display angle, a second image of the plurality of images for the interactive rotatable 360-degree presentation.
1. A system for generating an interactive rotatable 360-degree presentation of a first object, the system comprising: an interface configured to:
obtain data describing the first object, the data describing the first object including information about a plurality of arranged images of the first object to be obtained and converted into the interactive rotatable 360-degree presentation of the first object, and additional information about the first object, the additional information about the first object including at least one of a vehicle make, a vehicle model, a vehicle identification number, or a vehicle body style; and
obtain, responsive to obtaining the data describing the first object, the plurality of arranged images of the first object; and
a processor configured to: automatically rearrange the plurality of arranged images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the first object distributed around 360 degrees, wherein automatically rearranging the plurality of arranged images into the at least one sequence includes: selecting at least one image from the plurality of arranged images that corresponds to an exterior view of the first object for inclusion in a first sequence of the at least one sequence, and selecting at least one image from the plurality of arranged images that corresponds to an interior view of the first object for inclusion in a second sequence of the at least one sequence; and
automatically merge the ordered images of the at least one sequence into at least one interactive rotatable 360-degree view of the first object, the at least one interactive rotatable 360-degree view comprising at least one of the exterior view of the first object and the interior view of the first object.
For independent claim 16, claims 1 and 6 of U.S. Patent 10,853,985 anticipate and disclose the same limitations as claim 16. Therefore, claim 16 is not patentably distinct from claims 1 and 6 of U.S. Patent 10,853,985.
For dependent claims 17-20, 22, 24, 25, 27 and 29, claims 1, 4, 6, 7 and 9-11 of U.S. Patent 10,853,985 mirror and recite the limitations of claims 17-20, 22, 24, 25, 27 and 29 as shown in the claim chart above. Therefore, claims 17-20, 22, 24, 25, 27 and 29 are not patentably distinct from claims 1, 4, 6, 7 and 9-11 of U.S. Patent 10,853,985.
For dependent claim 23, claim 1 of U.S. Patent 10,853,985 does not disclose a hotspot associated with a point. However, these limitations are well-known in the art as disclosed in Lanahan et al. (U.S. Patent Application 2010/0005417 A1). It would have been obvious to apply the use of the designation of a point in media for the addition of a hotspot and an associated media element for access by a user (page 3/par. 37) as taught in Lanahan et al. (U.S. Patent Application 2010/0005417 A1) so that the hotspot automatically follows the point as the interactive rotatable 360-degree presentation rotates. Therefore, claim 23 is not patentably distinct from claim 1 of U.S. Patent 10,853,985.
For dependent claim 28, claim 1 of U.S. Patent 10,853,985 does not disclose a close-up image. However, these limitations are well-known in the art as disclosed in Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). It would have been obvious to apply the use of an arrangement of images including a close-up image to present an image at a greater focus (page 3/par. 45) as taught in Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). Therefore, claim 28 is not patentably distinct from claim 1 of U.S. Patent 10,853,985.
For independent claim 32, claims 1 and 6 of U.S. Patent 10,853,985 anticipate and disclose the same limitations as claim 32. Therefore, claim 32 is not patentably distinct from claims 1 and 6 of U.S. Patent 10,853,985.
For dependent claims 33-34, claims 1 and 6 of U.S. Patent 10,853,985 mirror and recite the limitations of claims 33-34 as shown in the claim chart above. Therefore, claims 33-34 are not patentably distinct from claims 1 and 6 of U.S. Patent 10,853,985.
For independent claim 35, claim 1 of U.S. Patent 10,853,985 does not disclose a system comprising an interface and a processor for displaying a first image; receiving a drag event; determining, responsive to the receiving, a drag distance of the drag event; determining, based on the drag distance, a display angle of a first object; and displaying, based on the display angle, a second image. However, these limitations are well-known in the art as disclosed in Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1). It would have been obvious to apply the use of removable media in combination with a processor to appropriately implement the functions of a computer system in presenting a 360-degree view of an object (page 2/par. 18 and 22 and page 4/par. 60), as disclosed in Ramamoorthy: displaying a first image of a plurality of images, determining a user dragging event for selecting the first image, and determining an angle of rotation from the dragging event to replace the first image with a second image of the plurality of images corresponding to the determined angle of rotation, for appropriately rotating an object (page 12/par. 226). In Ramamoorthy, each image of the presentation can have a plurality of hot spots, each selectable by a viewer to access a corresponding separate media file, so that the first image and the second image corresponding to the determined angle of rotation may present hot spots for selection by the viewer (page 4/par. 62, page 11/par. 192-193 and page 12/par. 226). It would have been further obvious to apply the determination of a drag distance as a change in position from an initial position for a drag event to appropriately calculate a rotation angle for rotating a viewing angle of an object for display (page 3/par. 40), as disclosed in Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1). Claim 1 of U.S. Patent 10,853,985 otherwise discloses a method of steps corresponding to the method of claim 35 as shown in the claim chart above. Therefore, claim 35 is not patentably distinct from claim 1 of U.S. Patent 10,853,985.
Claims 16-22, 25-27 and 29-31 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 12,039,646 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1). Claim 23 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 12,039,646 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Lanahan et al. (U.S. Patent Application 2010/0005417 A1). Claim 28 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 12,039,646 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). Claims 32-34 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 12,039,646. Claim 35 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 12,039,646 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1).
The following is a claim comparison of claims 16-23 and 25-35 of the instant application and claims 1 and 5-8 of U.S. Patent 12,039,646.
Application No. 18/732,900
U.S. Patent 12,039,646
16. A system for generating an interactive rotatable 360-degree presentation of an object, the system comprising: an interface configured to:
obtain data describing the object, and
obtain a plurality of images of the object; and
a processor configured to: automatically rearrange the plurality of images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees, and
automatically determine whether to add at least one hotspot to at least one image in the at least one sequence.
1. A computer-implemented method for generating an interactive rotatable 360-degree presentation of an object, the method comprising:
obtaining, from a first source, data describing the object, the data describing the object including (i) information about a plurality of images of the object to be obtained and converted into the interactive rotatable 360-degree presentation of the object, and (ii) additional information about the object;
automatically obtaining the plurality of images of the object;
automatically rearranging the plurality of images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees, wherein the at least one sequence is defined by a plurality of sequence strings which specify an order of the ordered images, wherein each sequence string of the plurality of sequence strings comprises one or more sub strings separated by spaces; determining, for each sequence string of the plurality of sequence strings, whether each substring of the one or more substrings begins with a character indicating reverse order; appending, for any substring beginning with the character indicating the reverse order, a range of indices in the reverse order to a list of indices corresponding to the ordered images;
automatically determining whether to add at least one hotspot to at least one image in the at least one sequence, and if the at least one hotspot is to be added, automatically adding the at least one hotspot to the at least one image, the at least one hotspot being associated with a separate media element and enabling access to the separate media element when selected by a viewer of the interactive rotatable 360-degree presentation of the object;
automatically merging the ordered images of the at least one sequence into at least one 360-degree view of the object; and automatically merging the at least one 360-degree view of the object into the interactive rotatable 360-degree presentation of the object.
17
1
18
1
19
1
20
1
21
5
22
8
23. The system of claim 16, wherein the at least one hotspot automatically follows an appropriate point on the interactive rotatable 360-degree presentation as the interactive rotatable 360-degree presentation rotates.
1
25
5-7
26
8
27
1
28. The system of claim 16, wherein the plurality of images comprises an exterior image of the object, an interior image of the object, and a close-up image of the object.
1
29
5-6
30
8
31
8
32. A computer implemented method, the method comprising:
obtaining data describing an object, the data describing the first object including information about a plurality of images of the object to be obtained and converted into an interactive rotatable 360-degree presentation of the object;
obtaining the plurality of images of the object;
automatically rearranging the plurality of images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees; and
automatically determining whether to add at least one hotspot to at least one image in the at least one sequence.
1. A computer-implemented method for generating an interactive rotatable 360-degree presentation of an object, the method comprising:
obtaining, from a first source, data describing the object, the data describing the object including (i) information about a plurality of images of the object to be obtained and converted into the interactive rotatable 360-degree presentation of the object, and (ii) additional information about the object;
automatically obtaining the plurality of images of the object;
automatically rearranging the plurality of images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees, wherein the at least one sequence is defined by a plurality of sequence strings which specify an order of the ordered images, wherein each sequence string of the plurality of sequence strings comprises one or more sub strings separated by spaces; determining, for each sequence string of the plurality of sequence strings, whether each substring of the one or more substrings begins with a character indicating reverse order; appending, for any substring beginning with the character indicating the reverse order, a range of indices in the reverse order to a list of indices corresponding to the ordered images;
automatically determining whether to add at least one hotspot to at least one image in the at least one sequence, and if the at least one hotspot is to be added, automatically adding the at least one hotspot to the at least one image, the at least one hotspot being associated with a separate media element and enabling access to the separate media element when selected by a viewer of the interactive rotatable 360-degree presentation of the object;
automatically merging the ordered images of the at least one sequence into at least one 360-degree view of the object; and automatically merging the at least one 360-degree view of the object into the interactive rotatable 360-degree presentation of the object.
33
1
34
1
35. A system for generating an interactive rotatable 360-degree presentation of a first object, the system comprising: an interface configured to:
obtain data describing the first object, the data including information about a plurality of images of the first object to be obtained and converted into the interactive rotatable 360-degree presentation of the first object, and additional information about the first object; and
obtain the plurality of images of the first object; and
processor configured to:
automatically merge the plurality of images into at least one interactive rotatable 360-degree view of the first object, the at least one interactive rotatable 360-degree view comprising at least one of an exterior view of the first object and an interior view of the first object;
display a first image of the plurality of images for the interactive rotatable 360-degree presentation; receive a drag event; determine, responsive to the receiving, a drag distance of the drag event; determine, based on the drag distance, a display angle of the first object; and display, based on the display angle, a second image of the plurality of images for the interactive rotatable 360-degree presentation.
1. A computer-implemented method for generating an interactive rotatable 360-degree presentation of an object, the method comprising:
obtaining, from a first source, data describing the object, the data describing the object including (i) information about a plurality of images of the object to be obtained and converted into the interactive rotatable 360-degree presentation of the object, and (ii) additional information about the object;
automatically obtaining the plurality of images of the object;
automatically rearranging the plurality of images into at least one sequence, the at least one sequence including ordered images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees, wherein the at least one sequence is defined by a plurality of sequence strings which specify an order of the ordered images, wherein each sequence string of the plurality of sequence strings comprises one or more sub strings separated by spaces; determining, for each sequence string of the plurality of sequence strings, whether each substring of the one or more substrings begins with a character indicating reverse order; appending, for any substring beginning with the character indicating the reverse order, a range of indices in the reverse order to a list of indices corresponding to the ordered images;
automatically determining whether to add at least one hotspot to at least one image in the at least one sequence, and if the at least one hotspot is to be added, automatically adding the at least one hotspot to the at least one image, the at least one hotspot being associated with a separate media element and enabling access to the separate media element when selected by a viewer of the interactive rotatable 360-degree presentation of the object;
automatically merging the ordered images of the at least one sequence into at least one 360-degree view of the object; and
automatically merging the at least one 360-degree view of the object into the interactive rotatable 360-degree presentation of the object.
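The sequence-string processing recited in the claim above can be illustrated with a short sketch. The concrete string format below (index ranges such as "1-4" separated by spaces, with a leading "r" as the character indicating reverse order) is a hypothetical assumption for illustration only; the claim does not fix these specifics.

```python
def parse_sequence_string(seq: str) -> list[int]:
    """Expand a sequence string into an ordered list of image indices.

    Assumed (hypothetical) format: substrings separated by spaces, each
    an inclusive index range "a-b"; a leading 'r' marks reverse order.
    """
    indices: list[int] = []
    for sub in seq.split():
        # Determine whether this substring begins with the character
        # indicating reverse order.
        reverse = sub.startswith("r")
        lo, hi = map(int, sub.lstrip("r").split("-"))
        rng = range(lo, hi + 1)
        # For a reverse-order substring, append the range of indices in
        # the reverse order to the list of indices for the ordered images.
        indices.extend(reversed(rng) if reverse else rng)
    return indices

print(parse_sequence_string("1-4 r6-9"))  # [1, 2, 3, 4, 9, 8, 7, 6]
```

The forward substring "1-4" expands in place, while "r6-9" contributes its indices reversed, yielding one flat ordered list of image indices.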
Claims 16-22, 25-27 and 29-31 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 12,039,646 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1).
Claim 23 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 12,039,646 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Lanahan et al. (U.S. Patent Application Publication 2010/0005417 A1).
Claim 28 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 12,039,646 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Yamamoto (U.S. Patent Application Publication 2009/0244314 A1).
Claims 32-34 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 12,039,646.
Claim 35 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent 12,039,646 in view of Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1).
For independent claim 16, claim 1 of U.S. Patent 12,039,646 does not disclose a system comprising an interface and a processor. However, these limitations are well-known in the art as disclosed in Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1). It would have been obvious to apply the use of removable media in combination with a processor to appropriately implement the functions of a computer system in presenting a 360-degree view of an object (page 2/par. 18 and 22 and page 4/par. 60) as disclosed in Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1). Claim 1 of U.S. Patent 12,039,646 otherwise discloses a method of steps corresponding to the functions of the system of claim 16 as shown in the claim chart above. Therefore, claim 16 is not patentably distinct from claim 1 of U.S. Patent 12,039,646.
For dependent claims 17-22, 25-27 and 29-31, claim 1 of U.S. Patent 12,039,646 mirrors and recites the limitations of claims 17-22, 25-27 and 29-31 as shown in the claim chart above. Therefore, claims 17-22, 25-27 and 29-31 are not patentably distinct from claim 1 of U.S. Patent 12,039,646.
For dependent claim 23, claim 1 of U.S. Patent 12,039,646 does not disclose a hotspot is associated with a point. However, these limitations are well-known in the art as disclosed in Lanahan et al. (U.S. Patent Application Publication 2010/0005417 A1). It would have been obvious to apply the use of the designation of a point in media for the addition of a hotspot and an associated media element for access by a user (page 3/par. 37) as taught in Lanahan et al. (U.S. Patent Application Publication 2010/0005417 A1) so that the hotspot automatically follows the point as the interactive rotatable 360-degree presentation rotates. Therefore, claim 23 is not patentably distinct from claim 1 of U.S. Patent 12,039,646.
For dependent claim 28, claim 1 of U.S. Patent 12,039,646 does not disclose a close-up image. However, these limitations are well-known in the art as disclosed in Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). It would have been obvious to apply the use of an arrangement of images including a close-up image to present an image at a greater focus (page 3/par. 45) as taught in Yamamoto (U.S. Patent Application Publication 2009/0244314 A1). Therefore, claim 28 is not patentably distinct from claim 1 of U.S. Patent 12,039,646.
For independent claim 32, claim 1 of U.S. Patent 12,039,646 anticipates and discloses the same limitations of claim 32. Therefore, claim 32 is not patentably distinct from claim 1 of U.S. Patent 12,039,646.
For dependent claims 33-34, claim 1 of U.S. Patent 12,039,646 mirrors and recites the limitations of claims 33-34 as shown in the claim chart above. Therefore, claims 33-34 are not patentably distinct from claim 1 of U.S. Patent 12,039,646.
For independent claim 35, claim 1 of U.S. Patent 12,039,646 does not disclose a system comprising an interface and a processor for displaying a first image; receiving a drag event; determining, responsive to the receiving, a drag distance of the drag event; determining, based on the drag distance, a display angle of a first object; and displaying, based on the display angle, a second image. However, these limitations are well-known in the art as disclosed in Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) and Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1). It would have been obvious to apply the use of removable media in combination with a processor to appropriately implement the functions of a computer system in presenting a 360-degree view of an object (page 2/par. 18 and 22 and page 4/par. 60) as disclosed in Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1). Such a system displays a first image of a plurality of images, determines a user dragging event for selecting the first image, and determines an angle of rotation from the dragging event to replace the first image with a second image of the plurality of images corresponding to the determined angle of rotation for appropriately rotating an object (page 12/par. 226). Further, each image of the presentation can have a plurality of hotspots such that each hotspot may be selected by a viewer to access a corresponding separate media file, so that the first image and the second image corresponding to the determined angle of rotation may present hotspots for selection by the viewer to access a corresponding separate media file (page 4/par. 62, page 11/par. 192-193 and page 12/par. 226). It would have been further obvious to apply the determination of a drag distance as a change in position from an initial position for a drag event to appropriately calculate a rotation angle for rotating a viewing angle of an object for display (page 3/par. 40) as disclosed in Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1). Claim 1 of U.S. Patent 12,039,646 otherwise discloses a method of steps corresponding to the method of claim 35 as shown in the claim chart above. Therefore, claim 35 is not patentably distinct from claim 1 of U.S. Patent 12,039,646.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 16-22, 24, 26, 27 and 30-34 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ramamoorthy (U.S. Patent Application Publication 2002/0085219 A1) (made of record of the IDS submitted 9/17/2024).
For claim 16, Ramamoorthy discloses a system for generating an interactive rotatable 360-degree presentation of an object (see Ramamoorthy at [02 and 73-74] with the generation of a multi-dimensional image of an object from a series of successive images of an object to form a 360-degree presentation of the object), the system comprising:
an interface configured to (see Ramamoorthy at [60] where the data can be stored on a storage device, [22] describes the various removable media that can store the matrix data, such as a CD-ROM):
obtain data describing the object (see Ramamoorthy at [189] with the metadata and the image matrix data),
and obtain a plurality of images of the object (see Ramamoorthy at [60] with the automatic capture of images to copy an object, see the image capture method of [58] that describes the capture of a series of images to store in a main image matrix);
and a processor configured to (see Ramamoorthy at [18] with the image processor): automatically rearrange the plurality of images into at least one sequence (see Ramamoorthy at [58] with the creation of a smoothly turning image set of the object which is stored in an organized fashion in a main image matrix),
the at least one sequence including ordered images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees (see Ramamoorthy at [73] with the image capturing procedure which captures images at a predetermined angular interval (alpha) across a 360 degree rotation, the images are stored in an ordered fashion in a main image matrix according to [75]);
automatically determine whether to add at least one hotspot to at least one image in the at least one sequence (see Ramamoorthy at [62] with composer which can add hotspots to an image, and where the hotspot information can be generated automatically by inserting a colored material on the object that acts as a trigger).
For claim 17, depending on claim 16, Ramamoorthy discloses wherein the processor is further configured to:
automatically merge the ordered images of the at least one sequence into at least one 360-degree view of the object (see Ramamoorthy at [178] with the viewer system that makes the actual presentation, see also [17 and 73-74] where each image data set is representative of an image of the object as viewed from an associated image capture viewing angle across 360-degrees of the object);
and automatically merge the at least one 360-degree view of the object into the interactive rotatable 360-degree presentation of the object (see Ramamoorthy at [73-74 and 222] with the user input to interact with the 3D space to manipulate the display of the object to perform a simulated rotation of the 360-degree presentation of the object).
For claim 18, depending on claim 16, Ramamoorthy discloses wherein the at least one hotspot is associated with a separate media element (see Ramamoorthy at [62] where the hotspots can be embedded to trigger a presentation of metadata or web link connection, see [193] where if a hotspot is selected during presentation then a control action is initiated and could result in display of any external or internal metadata such as audio, video, graphics).
For claim 19, depending on claim 18, Ramamoorthy discloses wherein the processor is further configured to enable access to the separate media element when selected by a viewer of the interactive rotatable 360-degree presentation of the object (see Ramamoorthy at [62] where the hotspots can be embedded to trigger a presentation of metadata or web link connection, see [193] where if a hotspot is selected during presentation then a control action is initiated and could result in display of any external or internal metadata such as audio, video, graphics).
For claim 20, depending on claim 18, Ramamoorthy discloses wherein the processor is further configured to: receive a user input selecting at least one hotspot, the at least one hotspot enabling access to a separate media element when selected by a viewer of the interactive rotatable 360-degree presentation of the object (see Ramamoorthy at [62] with the composer which can add hotspots to an image, where the presence of a colored material on the object acts as a trigger for automatically generating hotspot information that is embedded in the image, and where the hotspots can be embedded to trigger a presentation of metadata or a web link connection); and display, in response to the user input, the separate media element (see Ramamoorthy at [193] where, if a hotspot is selected during presentation, then a control action is initiated and could result in display of any external or internal metadata such as audio, video, graphics).
For claim 21, depending on claim 18, Ramamoorthy discloses wherein the at least one hotspot is a conditional hotspot that is added to a plurality of interactive rotatable 360-degree presentations that meet certain criteria (see Ramamoorthy at [62], [189-190] and [193] where the hotspot is added to the image according to external metadata information corresponding to colored material that acts as a trigger as a conditional hotspot meeting criteria of a user generation or automatic generation of the colored material on the object and a user clicking on the hotspot trigger to activate the conditional hotspot).
For claim 22, depending on claim 18, Ramamoorthy discloses wherein the separate media element comprises any of text, an additional image, a video, a web page link, and an additional interactive rotatable 360-degree presentation (see Ramamoorthy at [193] with the hotspot being linked to the presentation of audio files, video files, graphic files, or text files).
For claim 24, depending on claim 16, Ramamoorthy discloses wherein the processor is further configured to resize the plurality of images to a common size (see Ramamoorthy at [66] where the images of the object are captured by the camera and then scaled to a smaller size in a consistent manner, see also Ramamoorthy at [232] where a zoom button enables a user to expand the pixels of a selected area of a currently displayed image so that they fit in the same size as the original image).
For claim 26, depending on claim 16, Ramamoorthy discloses wherein the processor is further configured to determine a position for the at least one hotspot on the at least one image (see Ramamoorthy at [62] with the composer which can add hotspots to an image, and where the hotspot information can be generated based on a preplaced colored material on the object that acts as a trigger which corresponds to the object and see Ramamoorthy at [193] where a user clicks on the position of the hotspot trigger in the image for activating media elements associated with the hotspot).
For claim 27, depending on claim 16, Ramamoorthy discloses wherein automatically rearranging the plurality of images comprises determining an order for the plurality of images (see Ramamoorthy at [196-211] with the numbering system for images which specifies the order for arrangement).
For claim 30, depending on claim 16, Ramamoorthy discloses wherein the processor is further configured to determine a position for the at least one hotspot on each of the ordered images based on predetermined information corresponding to the object (see Ramamoorthy at [62] with the composer which can add hotspots to an image, and where the hotspot information can be generated based on a preplaced colored material on the object that acts as a trigger which corresponds to the object).
For claim 31, depending on claim 30, Ramamoorthy discloses wherein the processor is further configured to determine the separate media element based on the predetermined information corresponding to the object (see Ramamoorthy at [193] where the hotspot is linked by a predetermined association to a portion of an image that is operated to provide further information about the object, where that information can be a presentation of meta-data such as audio, video, or web links), the separate media element being any of text, an additional image, a video, a web page link, and an additional interactive rotatable 360-degree presentation (see Ramamoorthy at [193] with the hotspot being linked to the presentation of audio files, video files, graphic files, or text files).
For claim 32, Ramamoorthy discloses a computer-implemented method, the method comprising ([02 and 73-74] see the generation of a multi-dimensional image of an object from a series of successive images of an object to form a 360-degree presentation of the object):
obtaining data describing the object ([189] see the metadata and the image matrix data),
the data describing the first object including information about a plurality of images of the object to be obtained and converted into the interactive rotatable 360-degree presentation of the object ([179] see the .hlg data file which contains information about file generation and data streams pertaining to main image matrix coded image data, this is the image data that is used to generate the three dimensional object presentation according to [213, 215]);
obtaining the plurality of images of the object ([60] see the automatic capture of images to copy an object, see the image capture method of [58] that describes the capture of a series of images to store in a main image matrix);
automatically rearranging the plurality of images into at least one sequence ([58] see the creation of a smoothly turning image set of the object which is stored in an organized fashion in a main image matrix),
the at least one sequence including ordered images from a plurality of viewing angles of the object substantially evenly distributed around 360 degrees ([73] see the image capturing procedure which captures images at a predetermined angular interval (alpha) across a 360 degree rotation, the images are stored in an ordered fashion in a main image matrix according to [75]);
automatically determining whether to add at least one hotspot to at least one image in the at least one sequence ([62] see composer which can add hotspots to an image, and where the hotspot information can be generated automatically by inserting a colored material on the object that acts as a trigger).
For claim 33, depending on claim 32, Ramamoorthy discloses further comprising:
automatically merging the ordered images of the at least one sequence into at least one 360-degree view of the object ([178] see the viewer system that makes the actual presentation, see also [17 and 73-74] where each image data set is representative of an image of the object as viewed from an associated image capture viewing angle across 360-degrees of the object);
and automatically merging the at least one 360-degree view into the interactive rotatable 360-degree presentation of the object ([73-74 and 222] see the user input to interact with the 3D space to manipulate the display of the object to perform a simulated rotation).
For claim 34, depending on claim 32, Ramamoorthy discloses wherein the at least one hotspot is associated with a separate media element ([62] see where the hotspots can be embedded to trigger a presentation of metadata or web link connection, see [193] where if a hotspot is selected during presentation then a control action is initiated and could result in display of any external or internal metadata such as audio, video, graphics).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Ramamoorthy in view of Lanahan et al. (U.S. Patent Application Publication 2010/0005417 A1, hereinafter “Lanahan”).
For claim 23, depending on claim 16, Ramamoorthy does not disclose a hotspot is associated with a point.
However, these limitations are well-known in the art as disclosed in Lanahan.
Lanahan similarly discloses a system and method for rotating a presentation of media (page 3/par. 37). Lanahan explains a user may designate a point in its media for the addition of a hotspot and an associated media element for access by a user (page 3/par. 37). It follows Ramamoorthy may be accordingly modified with the teachings of Lanahan to designate an appropriate point on its at least one image to add its at least one hotspot so that the hotspot automatically follows the point as the interactive rotatable 360-degree presentation rotates.
A person having ordinary skill in the art (PHOSITA) before the effective filing date of the claimed invention would find it obvious to modify Ramamoorthy with the teachings of Lanahan. Lanahan is analogous art in dealing with a system and method for rotating a presentation of media (page 3/par. 37). Lanahan discloses its association of a hotspot with a designated point is advantageous in appropriately presenting additional information for an associated media item at a desired point (page 3/par. 37). Consequently, a PHOSITA would incorporate the teachings of Lanahan into Ramamoorthy for appropriately presenting additional information for an associated media item at a desired point. Therefore, claim 23 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.
Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Ramamoorthy in view of Hauk (U.S. Patent Application Publication 2014/0152806 A1) (made of record of the IDS submitted 9/17/2024).
For claim 25, depending on claim 16, Ramamoorthy discloses wherein the interactive rotatable 360-degree presentation comprises an interior view of the object and an exterior view of the object (see Ramamoorthy at [75], [226] and Fig. 16 where a user may perform the simulated rotation to view exterior view images of an object such as a drill; see also Ramamoorthy at [76] where an object may be manipulated by opening a door which would expose an interior view and [157] where a part may be removed which would also expose an interior view).
Examiner finds Ramamoorthy discloses a 360-degree view comprising an exterior view image of an object and an interior view image of the object.
In any case, these limitations are well-known in the art as disclosed in Hauk.
Hauk similarly discloses a system and method for presenting a sequence of images for desired objects (page 1/par. 5) where the images present a 360 degree view of the desired objects (page 3/par. 28). Hauk explains its 360 degree presentation of an object such as a vehicle may include images of the vehicle’s exterior (page 3/par. 28) and may further include images of the vehicle’s interior (page 3/par. 31). It follows Ramamoorthy may be accordingly modified with the teachings of Hauk to incorporate exterior and interior view images of its object in its interactive rotatable 360-degree view.
A PHOSITA before the effective filing date of the claimed invention would find it obvious to modify Ramamoorthy with the teachings of Hauk. Hauk is analogous art in dealing with a system and method for presenting a sequence of images for desired objects (page 1/par. 5) where the images present a 360 degree view of the desired objects (page 3/par. 28). Hauk discloses its 360 degree view of desired objects is advantageous in implementing a virtual showroom so that a user may effectively view the exterior and interior of desired objects to make an informed purchase (page 1/par. 5; and page 3/par. 28 and 31). Consequently, a PHOSITA would incorporate the teachings of Hauk into Ramamoorthy for implementing a virtual showroom so that a user may effectively view the exterior and interior of desired objects to make an informed purchase. Therefore, claim 25 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.
Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over Ramamoorthy in view of Hauk further in view of Yamamoto (U.S. Patent Application Publication 2009/0244314 A1) (made of record of the IDS submitted 9/17/2024).
For claim 28, depending on claim 16, Ramamoorthy as modified by Hauk discloses wherein the plurality of images comprises an exterior image of the object and an interior image of the object (see Ramamoorthy at [75], [226] and Fig. 16 where a user may perform the simulated rotation to view exterior view images of an object such as a drill; see also Ramamoorthy at [76] where an object may be manipulated by opening a door which would expose an interior view and [157] where a part may be removed which would also expose an interior view; Hauk similarly discloses a system and method for presenting a sequence of images for desired objects (page 1/par. 5) where the images present a 360 degree view of the desired objects (page 3/par. 28); Hauk explains its 360 degree presentation of an object such as a vehicle may include images of the vehicle’s exterior (page 3/par. 28) and may further include images of the vehicle’s interior (page 3/par. 31) and it follows Ramamoorthy may be accordingly modified with the teachings of Hauk to incorporate exterior and interior view images of its object in its interactive rotatable 360-degree view).
Ramamoorthy as modified by Hauk does not disclose a close-up image.
However, these limitations are well-known in the art as disclosed in Yamamoto.
Yamamoto similarly discloses a system and method for displaying a plurality of images (page 1/par. 2). Yamamoto explains its images may be presented as an arrangement of images including a close-up image (page 3/par. 45). It follows Ramamoorthy and Hauk may be accordingly modified with the teachings of Yamamoto to accommodate a close-up image of its objects among its plurality of images.
A PHOSITA before the effective filing date of the claimed invention would find it obvious to modify Ramamoorthy and Hauk with the teachings of Yamamoto. Yamamoto is analogous art in dealing with a system and method for displaying a plurality of images (page 1/par. 2). Yamamoto discloses its use of a close-up image is advantageous in appropriately attracting attention more than other images arranged in a plurality of images (page 3/par. 45). Consequently, a PHOSITA would incorporate the teachings of Yamamoto into Ramamoorthy and Hauk for appropriately attracting attention more than other images arranged in a plurality of images. Therefore, claim 28 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.
Claim 35 is rejected under 35 U.S.C. 103 as being unpatentable over Ramamoorthy in view of Hauk further in view of Zhang et al. (U.S. Patent Application Publication 2014/0320537 A1, hereinafter “Zhang”) (made of record of the IDS submitted 9/17/2024).
For claim 35, Ramamoorthy discloses a system for generating an interactive rotatable 360-degree presentation of a first object (see Ramamoorthy at [02] with the generation of a multi-dimensional image of an object from a series of successive images of an object where the images form a rotatable 360-degree presentation [73-75]), the system comprising:
an interface configured to (see Ramamoorthy at [60] where the data can be stored on a storage device, [22] describes the various removable media that can store the matrix data, such as a CD-ROM):
obtain data describing the first object, the data including information about a plurality of images of the object to be obtained and converted into the interactive rotatable 360-degree presentation of the first object (see Ramamoorthy at [57-58] for disclosing data for controlling a turn table arrangement to capture a plurality of images as arranged images of an object placed within the turn table for subsequent conversion into the rotatable 360 degree presentation [73-75]),
and additional information about the first object ([189-190] see the internal and external metadata that includes information to explain the image data in a bigger context);
and obtain the plurality of images of the first object (see Ramamoorthy at [60] with the automatic capture of images to copy an object, see the image capture method of [57-58] that describes the capture of the plurality of images as the arranged images by a turn table arrangement in response to obtaining the data for controlling the turn table arrangement in capturing the plurality of images);
and a processor configured to (see Ramamoorthy at [18] with the image processor):
automatically merge the plurality of images into at least one interactive rotatable 360-degree view of the first object (see Ramamoorthy at [178] with the viewer system that makes the actual presentation, see also [17] where each image data set is representative of an image of the object as viewed from an associated image capture viewing angle; see [222] with the user input to interact with the 3D space to manipulate the display of the object to perform a simulated rotation),
the at least one interactive rotatable 360-degree view comprising at least one of an exterior view of the first object and an interior view of the first object (see Ramamoorthy at [75], [226] and Fig. 16 where a user may perform the simulated rotation to view exterior view images of an object such as a drill; see also [76] where an object may be manipulated by opening a door which would expose an interior view and [157] where a part may be removed which would also expose an interior view; Hauk similarly discloses a system and method for presenting a sequence of images for desired objects (page 1/par. 5) where the images present a 360 degree view of the desired objects (page 3/par. 28); Hauk explains its 360 degree presentation of an object such as a vehicle may include images of the vehicle’s exterior (page 3/par. 28) and may further include images of the vehicle’s interior (page 3/par. 31); and it follows Ramamoorthy may be accordingly modified with the teachings of Hauk to incorporate exterior and interior view images of its object in its interactive rotatable 360-degree view);
display a first image of the plurality of images of the interactive rotatable 360-degree presentation (see Ramamoorthy at [226] for disclosing the display of a first image of the plurality of images for its interactive 360-degree presentation);
receive a drag event (see Ramamoorthy at [226] for disclosing the determination of user input such as a user operating a mouse to select the first image by dragging);
determine, based on the drag event, a display angle of the first object (see Ramamoorthy at [226] for disclosing the determination of an angle of rotation from the drag event as a display angle for displaying its object at the angle of rotation); and
display, based on the display angle, a second image of the plurality of images for the interactive rotatable 360-degree presentation (see Ramamoorthy at [226] for disclosing that the first image is replaced with a second image of the plurality of images corresponding to the determined angle of rotation, thereby rotating its object in the display of its interactive 360-degree presentation).
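The display sequence recited in the limitations above (display a first image, receive a drag event, determine a display angle, and display the image corresponding to that angle) can be sketched as follows. This is an illustrative approximation, not code from Ramamoorthy; the function name and the assumption that the plurality of images is evenly distributed around 360 degrees (per the claim language) are the author's own.

```python
def image_index_for_angle(display_angle_deg: float, num_images: int) -> int:
    """Map a display angle to the index of the nearest image in a
    sequence of num_images views evenly distributed around 360 degrees."""
    step = 360.0 / num_images  # angular spacing between captured views
    # Normalize the angle to [0, 360) and snap to the nearest view,
    # wrapping back to index 0 past the last view.
    return round((display_angle_deg % 360.0) / step) % num_images
```

For example, with 36 views spaced 10 degrees apart, a display angle of 90 degrees selects the image at index 9, and angles just short of 360 degrees wrap back to index 0.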
Ramamoorthy as modified by Hauk does not specifically disclose determining, responsive to receiving the drag event, a drag distance of the drag event; and determining, based on the drag distance, a display angle.
However, these limitations are well-known in the art as disclosed in Zhang.
Zhang similarly discloses a system and method for controlling an interactive 360-degree presentation of an object such as an electronic map (page 1/par. 2-4). Zhang discloses the detection of a screen dragging as a dragging event to be performed on the electronic map (page 3/par. 40). Zhang further discloses that a drag distance, i.e., the change in position from the initial position at which the electronic map was clicked, is determined from the screen dragging and used to calculate a rotation angle for rotating the viewing angle of the electronic map for display (page 3/par. 40). It follows Ramamoorthy and Hauk may accordingly be modified with the teachings of Zhang to determine a drag distance as a change in position for its drag event and to calculate its angle of rotation for rotating its object therefrom.
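The mapping Zhang describes, from drag distance to rotation angle, amounts to scaling the pointer's displacement from its initial click position into an angular change. A minimal sketch follows; the function name and the sensitivity constant degrees_per_pixel are illustrative assumptions of the author, not values disclosed in Zhang.

```python
def rotation_angle_from_drag(start_x: float, current_x: float,
                             degrees_per_pixel: float = 0.5) -> float:
    """Convert a drag distance (change in position from the initial
    click position) into a rotation angle for a simulated rotation.

    degrees_per_pixel is a hypothetical sensitivity constant."""
    drag_distance = current_x - start_x  # signed change in position
    return drag_distance * degrees_per_pixel
```

Dragging right of the click point yields a positive angle and dragging left a negative one, so the sign of the drag distance determines the direction of the simulated rotation.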
A PHOSITA before the effective filing date of the claimed invention would find it obvious to modify Ramamoorthy and Hauk with the teachings of Zhang. Zhang is analogous art in dealing with a system and method for controlling an interactive 360-degree presentation of an object such as an electronic map (page 1/par. 2-4). Zhang discloses its determination of a drag distance of a dragging event is advantageous in appropriately calculating a rotation angle and viewing angle for rotating an object for display (page 3/par. 40). Consequently, a PHOSITA would incorporate the teachings of Zhang into Ramamoorthy and Hauk for appropriately calculating a rotation angle and viewing angle for rotating an object for display. Therefore, claim 35 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.
Allowable Subject Matter
Claim 29 would be allowable if rewritten in independent form and upon submission of any suitable terminal disclaimers.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES TSENG whose telephone number is (571)270-3857. The examiner can normally be reached 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached on (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHARLES TSENG/ Primary Examiner, Art Unit 2613