Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
Claim 16 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 16 is in the apparatus claim set (claims 11-19) and corresponds to method claim 6, but it improperly depends from claim 2, which is a method claim. This improper dependency creates an inconsistent claim type and renders the scope of claim 16 indefinite. Suggestion: amend claim 16 to depend from an appropriate apparatus claim. Appropriate correction is required.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 2, 3, 5, 6, 11, 12, 13, 15, 16, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tsuda et al. (U.S. Patent Publication No. 2016/0234149).
Regarding claim 1, Tsuda discloses a virtual object (interpreted as avatar) display method, performed by a terminal device, comprising [Tsuda: 0110 “communication terminal 10 may be stored in another device on a network such as server device 20. A display device constituting touch screen unit 13 may separate from communication terminal 10.”]: displaying a first area of a virtual social scene and at least one virtual object located in the first area of a user interface (interpreted as displaying a region of a virtual environment in the user interface for socializing)[Tsuda: 0038 “A three-dimensional image may be obtained by capturing an image of avatars arranged in a virtual space using a virtual camera”][Tsuda: 0103 “A user whose avatar is displayed in avatar display area Ar12 may be a user who has posted a message that satisfies a predetermined condition. The message satisfying a predetermined condition may be a message having a level of importance or urgency that equals or exceeds a predetermined threshold, a message that remains unread”](teaches displaying a UI region for messaging which corresponds to social scene, further teaches avatars in a virtual space which is interpreted as a social scene); and based on a first virtual object in the virtual social scene having social information to be displayed [Tsuda: 0041 “In avatar display area Ar12, also appearing are messages that have been posted by users represented by displayed avatars AV1.”](teaches messages (social scene) are also displayed, represented by avatars (corresponding to based on first virtual object)), and based on the first virtual object being located outside a second area of the virtual social scene comprising an entirety or a part of the first area, displaying the first virtual object in the user interface and displaying the social information (interpreted as first virtual object outside a second area of the virtual social scene where the second area is all of or part of the first area, then the 
system displays the virtual object in the UI and displays the social information)[Tsuda: 0038 “Each of group display areas Ar1 includes group information display area Ar11 and avatar display area”][Tsuda: 0050 “selection screens shown in FIGS. 7A to 7C is performed when a new message is posted by a user belonging to group A”](teaches a second display area and displaying virtual objects and messages, which corresponds to the social scene).
Regarding claim 2, Tsuda discloses the method according to claim 1, wherein the displaying the first virtual object comprises: moving the first virtual object from outside the second area to the second area displayed in the user interface (interpreted as displaying the first object is carried out by changing the position of the first virtual object over time so that it starts outside the second area and then moves into that second area, which is displayed on the UI) [Tsuda: 0050 “whose avatar is not displayed in avatar display area Ar12.”][Tsuda: 0052 “new avatar AV1 appears in avatar display area Ar12”](teaches the ability to have the avatar outside and inside the display area).
Regarding claim 3, Tsuda discloses the method according to claim 2, wherein the moving the first virtual object comprises: determining a movement parameter of the first virtual object according to a first location at which the first virtual object is currently located and a second location to which the first virtual object needs to move, the second location being located in the second area; and moving, according to the movement parameter, the first virtual object from the first location to the second location (interpreted as determining where to move the virtual object from first to second area)[Tsuda: 0053 “the screen transition animation also shows a process of relocating avatars AV1 are in response to posting of a new message”](teaches relocating the avatars (virtual objects)).
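For illustration only (this sketch is not part of Tsuda's disclosure), the claimed operation of determining a movement parameter from a first location and a second location, then moving the object accordingly, can be modeled as follows; the 2-D coordinates, constant speed, and function names are all hypothetical:

```python
import math

def movement_parameter(first_loc, second_loc, speed=1.0):
    """Derive a movement parameter (unit direction and duration) for
    moving an object from first_loc to second_loc at constant speed."""
    dx = second_loc[0] - first_loc[0]
    dy = second_loc[1] - first_loc[1]
    distance = math.hypot(dx, dy)
    direction = (dx / distance, dy / distance) if distance else (0.0, 0.0)
    return direction, distance / speed  # duration of the move

def position_at(first_loc, direction, speed, t):
    """Interpolated position of the object t time units into the move."""
    return (first_loc[0] + direction[0] * speed * t,
            first_loc[1] + direction[1] * speed * t)
```

Under this reading, the "movement parameter" is simply the direction-and-duration pair derived from the two locations, and the relocation animation Tsuda describes samples `position_at` over the duration of the move.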
Regarding claim 5, Tsuda discloses the method according to claim 3, wherein after the moving the first virtual object from the first location to the second location, the method further comprises: based on the first virtual object meeting a first condition, moving the first virtual object from the second location back to the first location (interpreted as, based on a condition being met, moving the object from the second location back to the first location)[Tsuda: 0053 “The screen transition animation also shows a process of relocating avatars AV1 are in response to posting of a new message.”](teaches relocating the avatars (objects) under the condition of posting a message).
Regarding claim 6, Tsuda discloses the method according to claim 2, wherein the first virtual object is displayed in a first form when moving [Tsuda: 0054 “characters represented by avatars AV1 are shown to move by running”], wherein the first virtual object is displayed in a second form when not moving [Tsuda: 0040 “Each of avatars AV1 shows a standing full-faced character”](teaches the avatars (objects) displayed standing, meaning they are not moving), and wherein the first form is different from the second form (the two quoted passages describe two distinct forms, running and standing, which are different from one another).
Claims 11 and 20 are apparatus and non-transitory computer readable storage medium claims corresponding to method claim 1 without any additional limitations. Thus, claims 11 and 20 are rejected for the same reasons as claim 1 above.
Claim 12 is an apparatus claim corresponding to method claim 2 without any additional limitations. Thus, claim 12 is rejected for the same reasons as claim 2 above.
Claim 13 is an apparatus claim corresponding to method claim 3 without any additional limitations. Thus, claim 13 is rejected for the same reasons as claim 3 above.
Claim 15 is an apparatus claim corresponding to method claim 5 without any additional limitations. Thus, claim 15 is rejected for the same reasons as claim 5 above.
Claim 16 is an apparatus claim corresponding to method claim 6 without any additional limitations. Thus, claim 16 is rejected for the same reasons as claim 6 above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4, 7, 8, 9, 10, 14, 17, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Tsuda et al. (U.S. Patent Publication No. 2016/0234149), in view of Myhill et al. (CA3056269).
Regarding claim 4, Tsuda discloses the method according to claim 3, but fails to explicitly disclose wherein before the determining the movement parameter, the method further comprises: determining a location point closest to the first location on a bounding box of the second area as the second location.
However, Myhill discloses wherein before the determining the movement parameter, the method further comprises: determining a location point closest to the first location on a bounding box of the second area as the second location (interpreted as choosing the destination point by taking the bounding box of the second area and selecting the point on the bounding box that is closest to the first location)[Myhill: 0049 “module finds the point on the edge of the supplied boundary which is closest from the target”][Myhill: 0049 “The composition module then compares the target position in screen space with the selected target zone.”](teaches a geometric calculation that selects the boundary point closest to the target).
Tsuda and Myhill are considered to be analogous to the claimed invention because they are in the same field of displaying virtual objects. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tsuda to incorporate Myhill’s teachings of position calculation. The motivation for such a combination would be to provide the benefit of minimizing movement distance.
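For illustration only (this sketch is not from either reference), the operation Myhill describes, finding the point on a boundary closest to the target, can be modeled for an axis-aligned rectangular bounding box as follows; the 2-D coordinate representation and function name are hypothetical:

```python
def closest_point_on_bounding_box(first_loc, box_min, box_max):
    """Return the point on the edge of the rectangular bounding box
    (corners box_min, box_max) that is closest to first_loc."""
    x, y = first_loc
    # Clamp the point into the box; for an exterior point this already
    # lands on the nearest edge.
    cx = min(max(x, box_min[0]), box_max[0])
    cy = min(max(y, box_min[1]), box_max[1])
    # For an interior point, project it onto the nearest of the four edges.
    if box_min[0] < cx < box_max[0] and box_min[1] < cy < box_max[1]:
        d_left, d_right = cx - box_min[0], box_max[0] - cx
        d_bottom, d_top = cy - box_min[1], box_max[1] - cy
        d = min(d_left, d_right, d_bottom, d_top)
        if d == d_left:
            cx = box_min[0]
        elif d == d_right:
            cx = box_max[0]
        elif d == d_bottom:
            cy = box_min[1]
        else:
            cy = box_max[1]
    return (cx, cy)
```

Selecting the closest boundary point in this way is what yields the shortest movement distance cited as the motivation for the combination.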
Regarding claim 7, Tsuda discloses the method according to claim 1, but fails to explicitly disclose wherein the displaying the first virtual object comprises: displaying a third area of the virtual social scene comprising the first virtual object.
However, Myhill discloses wherein the displaying the first virtual object comprises: displaying a third area of the virtual social scene comprising the first virtual object [Myhill: 0036 “the composition module dynamically changes the orientation of the camera to keep the target inside the target zone”](teaches changing the orientation of the camera, thereby displaying a third area of the scene while still comprising the target object).
Tsuda and Myhill are considered to be analogous to the claimed invention because they are in the same field of displaying virtual objects. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tsuda to incorporate Myhill’s teachings of displaying a third area comprising the target. The motivation for such a combination would be to provide the benefit of predictable visibility control.
Regarding claim 8, Tsuda discloses the method according to claim 7, but fails to explicitly disclose wherein the displaying the third area comprises: determining an offset parameter of the first area according to a first location at which the first virtual object is currently located; determining an adjusted first area as the third area based on adjusting a location of the first area according to the offset parameter; and displaying the third area in the user interface.
However, Myhill discloses wherein the displaying the third area comprises: determining an offset parameter (interpreted as a value that specifies how the currently displayed area will be shifted) of the first area according to a first location at which the first virtual object is currently located [Myhill: 0049 “The composition module then compares the target position in screen space with the selected target zone”][Myhill: 0049 “The composition module then determines the angle between the two vectors and uses the value to scale the time-constant tracking values”]; determining an adjusted first area as the third area based on adjusting a location of the first area according to the offset parameter; and displaying the third area in the user interface (interpreted as forming a new displayed area by adjusting the location of the originally displayed first area) [Myhill: 0049 “If the selected target zone (e.g. central bounding region) in viewport space does not contain the target, then the composition module must readjust the camera orientation”][Myhill: 0050 “The composition module then uses the resulting quaternions to rotate the camera by the requested amount”](teaches adjusting the displayed view by reorienting the camera, meaning the displayed area is readjusted based on the computed values).
Tsuda and Myhill are considered to be analogous to the claimed invention because they are in the same field of displaying virtual objects. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tsuda to incorporate Myhill’s teachings of adjusting the displayed view. The motivation for such a combination would be to provide the benefit of shifting the view to bring the object into display.
Regarding claim 9, Tsuda discloses the method according to claim 8, but fails to explicitly disclose wherein the determining the offset parameter comprises: determining a location point closest to the first location on a bounding box of the second area as a third location; and determining the offset parameter of the first area according to the first location at which the first virtual object is currently located and the third location of the second area.
However, Myhill discloses wherein the determining the offset parameter comprises: determining a location point closest to the first location on a bounding box of the second area as a third location (interpreted as selecting a third location by taking the bounding box of the second area and choosing the point on that bounding box that is closest to the virtual object’s current location)[Myhill: 0049 “the composition module finds the point on the edge of the supplied boundary which is closest from the target so that the camera will rotate on the shortest path which puts the target into the desired composition target zone”](teaches selecting the point on the edge/boundary that is closest to the target); and determining the offset parameter (interpreted as a numeric adjustment that shifts/reframes the displayed area) of the first area according to the first location at which the first virtual object is currently located and the third location of the second area [Myhill: 0049 “The composition module then calculates two vectors; first, a vector from the camera origin to the target and second, a vector from the camera origin to the point on the boundary calculated in the previous step. Both the vectors are calculated in camera coordinates. The composition module then projects the two vectors onto the right axis to use as basis for horizontal tracking and it projects the two vectors onto the up axis to use as a basis for the vertical tracking. The composition module then determines the angle between the two vectors”](teaches determining an adjustment amount by using a vector from the camera’s origin to the target and a vector from the camera’s origin to the closest boundary point).
Tsuda and Myhill are considered to be analogous to the claimed invention because they are in the same field of displaying virtual objects. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tsuda to incorporate Myhill’s teachings of determining an adjustment parameter based on the target object’s current on-screen location. The motivation for such a combination would be to provide the benefit of straightforward and predictable viewport adjustment.
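For illustration only (this is not a reproduction of Myhill's actual implementation), the angle-between-two-vectors computation quoted from Myhill ¶0049 can be sketched as follows; the 3-D camera-space vectors and function name are hypothetical:

```python
import math

def offset_parameter(cam_to_target, cam_to_boundary):
    """Angle (radians) between the camera-to-target vector and the
    camera-to-closest-boundary-point vector. Per Myhill's description,
    such an angle can be used to scale how far the view is shifted."""
    dot = sum(a * b for a, b in zip(cam_to_target, cam_to_boundary))
    norm_t = math.sqrt(sum(a * a for a in cam_to_target))
    norm_b = math.sqrt(sum(b * b for b in cam_to_boundary))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_theta = max(-1.0, min(1.0, dot / (norm_t * norm_b)))
    return math.acos(cos_theta)
```

A zero angle means the target already lies along the boundary-point direction (no shift needed), while a larger angle yields a proportionally larger reframing of the displayed area.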
Regarding claim 10, Tsuda discloses the method according to claim 7, but fails to explicitly disclose wherein after the displaying the third area comprises: based on the first virtual object meeting a first condition, switching an area displayed in the user interface from the third area back to the first area.
However, Myhill discloses wherein after the displaying the third area comprises: based on the first virtual object meeting a first condition, switching an area displayed in the user interface from the third area back to the first area [Myhill: 0058 “the collider module attempts to preserve the original camera height by pushing the camera back towards its original position before the target ray was compromised”](teaches preserving the original camera view and pushing the camera back, corresponding to switching back to the first area).
Tsuda and Myhill are considered to be analogous to the claimed invention because they are in the same field of displaying virtual objects. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tsuda to incorporate Myhill’s teachings of restoring the displayed view back toward its original area. The motivation for such a combination would be to provide the benefit of straightforward user control.
Claim 14 is an apparatus claim corresponding to method claim 4 without any additional limitations. Thus, claim 14 is rejected for the same reasons as claim 4 above.
Claim 17 is an apparatus claim corresponding to method claim 7 without any additional limitations. Thus, claim 17 is rejected for the same reasons as claim 7 above.
Claim 18 is an apparatus claim corresponding to method claim 8 without any additional limitations. Thus, claim 18 is rejected for the same reasons as claim 8 above.
Claim 19 is an apparatus claim corresponding to method claim 9 without any additional limitations. Thus, claim 19 is rejected for the same reasons as claim 9 above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHMED TAHA whose telephone number is (571)272-6805. The examiner can normally be reached 8:30 am - 5 pm, Mon - Fri.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, XIAO WU can be reached at (571)272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AHMED TAHA/Examiner, Art Unit 2613
/XIAO M WU/Supervisory Patent Examiner, Art Unit 2613