DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This office action is responsive to the amendment received 12/19/2025.
In the response to the Non-Final Office Action mailed 09/24/2025, the applicant states that claims 1-4, 6-8, 10-13, 15-17, 19, and 20 have been amended. Claims 1-20 are pending and under examination.
Response to Arguments
Applicant's arguments filed 12/19/2025 have been fully considered but they are not persuasive.
Regarding the objection to the specification, the amendment has cured the basis of the objection. Therefore, the objection to the specification is hereby withdrawn.
Regarding the 35 U.S.C. § 101 rejection of claim 20, the amendment has cured the basis of the rejection. Therefore, the 35 U.S.C. § 101 rejection of claim 20 is hereby withdrawn.
Regarding claim 1, the applicant argues that Fein and Kunkel, whether taken alone or in combination, fail to disclose or suggest "determining an influential object of the target object according to the target object and the reference point," and "determining a position located on the influential object as the second position." The arguments have been fully considered, but they are not persuasive. The examiner cannot concur with the applicant for the following reasons:
Kunkel discloses “determining an influential object of the target object according to the target object and the reference point”. For example, in Fig. 7B and paragraph [0077], Kunkel teaches that a content analyzer of the system determines that only visual objects 702 and 706 are visible, and that visual object 704 is occluded by an influential object, i.e., visual object 706. In Fig. 7D and paragraph [0079], Kunkel teaches that visual object 704 is occluded by visual object 706, as illustrated in Fig. 7D.
[image: media_image1.png, greyscale, 250 × 288]
; Kunkel further teaches that visual object 706 is an influential object; Kunkel furthermore teaches that visual object 704 is one of the target objects.
Kunkel discloses “determining a position located on the influential object as the second position”. For example, in Fig. 2 and paragraph [0047], Kunkel teaches that the location is coordinates in a 3D reference frame, and that the content analyzer 107 receives and updates reference location coordinates from a user interface. In paragraph [0073], Kunkel teaches that the system continues to track (604) the objects after repositioning, i.e., tracking the positions of the objects. In paragraph [0074], Kunkel teaches determining that, due to movement of the object, the object appears too small, too dark, or too bright according to one or more threshold values specified in a rule specifying acceptable size and brightness limitations on the object. In paragraph [0075], Kunkel teaches that the system then continues to track (604) the object and the movement of the object, i.e., its positions. In Fig. 7A and paragraph [0076], Kunkel teaches tracking visual object 706 as illustrated in Fig. 7A. In Fig. 7C and paragraph [0078], Kunkel teaches that the content analyzer determines that, by moving the content capturing device higher, the occlusion can be avoided. In Fig. 7D and paragraph [0079], Kunkel teaches that visual object 704 is occluded by visual object 706, as illustrated in Fig. 7D.
[image: media_image1.png, greyscale, 250 × 288]
; Kunkel further teaches that visual object 706 is an influential object; Kunkel furthermore teaches that visual object 704 is one of the target objects.
Claims 10 and 19 are not allowable for reasons similar to those discussed above.
Claims 2-6, 7-9, 11-15, 16-18, and 20 are not allowable for reasons similar to those discussed above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7-14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Fein (US 20140098137 A1) in view of Kunkel (US 20180234612 A1).
Regarding claim 1 (Currently Amended), Fein discloses a data processing method (Fig. 3A; [0060]: a specifically-designed AR system in the form of video goggles; [0061]: capture actual scenes of real-world environments in order to generate augmented views of those actual scenes; [0065]: display and overlay computer generated data and images onto portions of views of actual scenes of the real-world environment) comprising:
obtaining a first output image about a first space in a virtual space based on a first value, an output parameter being the first value and the first space including a target object (Fig. 5A; [0080]: the AR device 70* of FIG. 7A captures and displays an actual view 50a of a scene from the real environment; the view 50a includes the storefront of the retail business, i.e. a target object; Fig. 5B; [0081]: represents the view displayed by the AR device 70* of FIG. 7A or 7B at a first point; Fig. 6A; [0085]: the initial position, e.g., position 1, of the user 62);
in response to a first condition being met, adjusting the output parameter from the first value to a second value (Fig. 5D; [0083]: the exemplary user 62 moves away, i.e., first condition, from the first actual scene; Fig. 6; [0086]: as the user 62 moves away, i.e. first condition, from the retail business, the user 62 moves to position 2); and
after the output parameter is adjusted to be the second value, obtaining a second output image of a second space in the virtual space based on the second value (Fig. 5D; [0083]: AR device 70* captures and displays view 50d at a third point as the exemplary user 62 moves away from the first actual scene; Fig. 5A; 5B; 5C; 5D; 5E; 5F; Fig. 6A; [0084]: the movements of the exemplary user 62, when the AR device 70* is capturing and/or displaying the various views, are generally illustrated in FIG. 6A),
wherein:
the first value includes a first position and the second value includes a second position, the first position and the second position corresponding to the position of the reference point (Fein; Fig. 5A; Fig. 5B; [0080-0081]: represent the view displayed by the AR device 70* of FIG. 7A or 7B at a first point; Fig. 5D; [0083]: AR device 70* captures and displays view 50d at a third point as the exemplary user 62 moves away from the first actual scene; Fig. 6A; [0085]: the initial position, e.g., position 1, of the user 62);
Fein fails to explicitly disclose:
the second space including the target object;
the output parameter includes a position of a reference point in the virtual space; and
adjusting the output parameter from the first value to the second value includes:
determining an influential object of the target object according to the target object and the reference point;
determining a position located on the influential object as the second position; and
adjusting the output parameter from the first position to the second position.
In the same field of endeavor, Kunkel teaches:
the second space including the target object (Fig. 7B; [0077]: a digital image is taken by the content capturing device of the visual objects at a first location; Fig. 7D; Fig. 7E; [0081]: a digital image is taken by the content capturing device of the visual objects at a second location;
[image: media_image2.png, greyscale, 236 × 292]
;
[image: media_image3.png, greyscale, 288 × 306]
; the second space includes target objects 702, 704, and 710 as illustrated in Fig. 7D and Fig. 7E);
the output parameter includes a position of a reference point in the virtual space (Kunkel; Fig. 7B; [0077]: a digital image is taken by the content capturing device of the visual objects at a first location); and
adjusting the output parameter from the first value to the second value (same as rejected above) includes:
determining an influential object of the target object according to the target object and the reference point (Kunkel; Fig. 7B; [0077]: a content analyzer of the system determines that only visual objects 702 and 706 are visible, and that the visual object 704 is occluded by an influential object);
determining a position located on the influential object as the second position (Kunkel; Fig. 2; paragraph [0047]: the location is coordinates in a 3D reference frame; the content analyzer 107 receives and updates reference location coordinates from a user interface; [0073]: the system continues to track (604) the objects after repositioning, i.e., tracks the position of the object; [0074]: determine that, due to movement of the object, that object appears too small, too dark, or too bright according to one or more threshold values specified in a rule specifying acceptable size and brightness limitations on the object; [0075]: the system then continues to track (604) the object and the movement of the object, i.e., positions; Fig. 7A; [0076]: track visual object 706 as illustrated in Fig. 7A; Fig. 7C; [0078]: the content analyzer determines that by moving the content capturing device higher, the occlusion can be avoided; Fig. 7D; [0079]: a visual object 704 is occluded by visual object 706 as illustrated in Fig. 7D.
[image: media_image1.png, greyscale, 250 × 288]
; the visual object 706 is an influential object.); and
adjusting the output parameter from the first position to the second position (Kunkel; Fig. 7D; Fig. 7E; [0078-0079]; Fig. 7E; [0081]: a digital image is taken by the content capturing device of the visual objects at a second location).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fein to include the second space including the target object; the output parameter includes a position of a reference point in the virtual space; and adjusting the output parameter from the first value to the second value includes: determining an influential object of the target object according to the target object and the reference point; determining a position located on the influential object as the second position; and adjusting the output parameter from the first position to the second position, as taught by Kunkel. The motivation for doing so would have been to improve cinematography by avoiding occlusion, i.e., capturing a digital image at a second location at which the occlusion is avoided, as taught by Kunkel in Fig. 7E and paragraphs [0075], [0078], and [0081].
Regarding claim 2 (Currently Amended), Fein in view of Kunkel discloses the method according to claim 1, wherein:
obtaining the first output image of the first space in the virtual space based on the first value (same as rejected in claim 1) includes: obtaining the first output image of the first space in the virtual space based on a reference point located at the first position (Fein; Fig. 5A; [0080]: the AR device 70* of FIG. 7A captures and displays an actual view 50a of a scene from the real environment; the view 50a includes the storefront of the retail business, i.e. a target object; Fig. 5B; [0081]: represents the view displayed by the AR device 70* of FIG. 7A or 7B at a first point; Fig. 6A; [0085]: the initial position, e.g., position 1, of the user 62);
obtaining the second output image of the second space in the virtual space based on the second value (same as rejected in claim 1) includes: obtaining the second output image of the second space in the virtual space based on a reference point located at the second position (Fein; Fig. 5D; [0083]: AR device 70* captures and displays view 50d at a third point as the exemplary user 62 moves away from the first actual scene; Fig. 5A; 5B; 5C; 5D; 5E; 5F; Fig. 6A; [0084]: the movements of the exemplary user 62, when the AR device 70* is capturing and/or displaying the various views, are generally illustrated in FIG. 6A).
The same motivation as in claim 1 applies here.
Regarding claim 3 (Currently Amended), Fein in view of Kunkel discloses the method according to claim 1, wherein determining the influential object of the target object according to the target object and the reference point (same as rejected in claim 1) includes:
obtaining a first target position of the target object in the virtual space and a second target position of the reference point in the virtual space (Kunkel; Fig. 7C; [0078]: the content analyzer determines that by moving the content capturing device higher, i.e. a second position, the occlusion can be avoided; Fig. 7D; [0079]: the content analyzer determines that by moving the content capturing device to the left, the occlusion can be avoided);
based on the first target position and the second target position, determining a reference line between the target object and the reference point (Kunkel; [0089]: the system determines (904), from a second digital image captured by the content capturing device, that the first object is obstructed by the second object; the obstruction is caused by movement of the objects from one position to another position; the system tracks a movement of the first object or the second object in reference to one or more stationary objects of the visual objects); and
determining objects that the reference line passes through as the influential object of the target object (Fein; Fig. 5E; [0087]: display a view 50e of FIG. 5E that includes augmentation 51b, i.e. an influential object;
[image: media_image4.png, greyscale, 212 × 284]
; Fig. 6C; [0108]: FIG. 6B illustrates the ocular focus (e.g., eye focus) of the user 62 with respect to the store front scene from FIGS. 5A, 5B, 5C, 5D, and/or 5E that may be detected or sensed by the AR device 70*. FIG. 6C, on the other hand, illustrates an example scanning behavior of the user 62 that may be exhibited by the user 62 when the user 62 is visually searching for a previously displayed augmentation;
[image: media_image5.png, greyscale, 374 × 760]
).
The same motivation as in claim 1 applies here.
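As an illustrative sketch only (not code drawn from Fein, Kunkel, or the application; all names and the bounding-sphere geometry are hypothetical), the claim 3 steps of drawing a reference line between the target object and the reference point and collecting the objects that the line passes through might look like:

```python
import math

def influential_objects(target_pos, reference_pos, objects):
    """Return the objects whose bounding sphere is crossed by the line
    segment from the reference point to the target object (assumes the
    two points are distinct)."""
    ax, ay, az = reference_pos
    bx, by, bz = target_pos
    dx, dy, dz = bx - ax, by - ay, bz - az
    seg_len_sq = dx * dx + dy * dy + dz * dz
    hits = []
    for obj in objects:
        cx, cy, cz = obj["center"]
        # Project the object's center onto the segment, clamped to [0, 1].
        t = ((cx - ax) * dx + (cy - ay) * dy + (cz - az) * dz) / seg_len_sq
        t = max(0.0, min(1.0, t))
        px, py, pz = ax + t * dx, ay + t * dy, az + t * dz
        # The reference line "passes through" the object if the closest
        # point on the segment lies within the object's radius.
        if math.dist((cx, cy, cz), (px, py, pz)) <= obj["radius"]:
            hits.append(obj)
    return hits
```

An object sitting on the segment between the two positions is reported as influential; objects off to the side are not.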
Regarding claim 4 (Currently Amended), Fein in view of Kunkel discloses the method according to claim 1, wherein determining the position located on the influential object as the second position includes:
in response to presence of multiple of influential objects, selecting an object of the influential objects which is closest to the target object as a target influential object (Kunkel; Fig. 7B; [0077]: a content analyzer of the system determines that only visual objects 702 and 706 are visible, and that the visual object 704 is occluded; Fig. 7D; [0079]: visual object 706 includes left side, right side, and head; the left side of visual object 706 overlaps with visual object 704, i.e., closest; the left side of visual object 706 is selected as influential; the content analyzer determines that by moving the content capturing device to the left, the occlusion can be avoided;
[image: media_image6.png, greyscale, 242 × 274]
; [0088]: the system identifies multiple individual visual objects including a first object and a second object); and
determining a position located on the target influential object as the second position (Kunkel; Fig. 7C; [0078]: the content analyzer determines that by moving the content capturing device higher, i.e. a second position, the occlusion can be avoided; Fig. 7D; [0079]: the content analyzer determines that by moving the content capturing device to the left, the occlusion can be avoided).
The same motivation as in claim 1 applies here.
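Continuing the same hypothetical sketch (none of these names come from the cited references), claim 4's selection of the influential object closest to the target, and one possible choice of a "position located on" that object, could be modeled as:

```python
import math

def pick_target_influential(target_pos, influential):
    """Among multiple influential objects, select the one whose center
    is closest to the target object."""
    return min(influential, key=lambda o: math.dist(o["center"], target_pos))

def position_on_object(obj, target_pos):
    """A position located on the selected object: here, the point on its
    bounding sphere that faces the target (one possible choice)."""
    c, r = obj["center"], obj["radius"]
    d = math.dist(c, target_pos) or 1.0  # guard against division by zero
    return tuple(ci + (ti - ci) * r / d for ci, ti in zip(c, target_pos))
```

The second position can then be taken from `position_on_object` applied to the object returned by `pick_target_influential`.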
Regarding claim 5 (Original), Fein in view of Kunkel discloses the method according to claim 1, wherein:
in response to the first condition being met, adjusting the output parameter from the first value to the second value (same as rejected in claim 1) includes:
in response to the first condition and a second condition being met, adjusting the output parameter from the first value to the second value (Fein; [0080]: views 50b, 50c, 50d, 50e, and 50f are exemplary views that may be displayed by the AR device 70* in response to one or more behaviors, e.g., ocular or body movements, of the user 62 as illustrated in, for example, FIGS. 6A, 6B, 6C, 6D, and/or 6E; i.e., one or more conditions are met).
Regarding claim 7 (Currently Amended), Fein in view of Kunkel discloses the method according to claim 5, wherein, in response to the first condition and the second condition being met, adjusting the output parameter from the first value to the second value (same as rejected in claim 5), includes:
in response to the influential object of the target object being detected, determining that the first condition is met (Kunkel; Fig. 7B; [0077]: a content analyzer of the system determines that only visual objects 702 and 706 are visible, and that the visual object 704 is occluded; Fig. 7D; [0079]: visual object 706 includes left side, right side, and head; the left side of visual object 706 overlaps with visual object 704, i.e., closest; the left side of visual object 706 is selected as influential; the content analyzer determines that by moving the content capturing device to the left, the occlusion can be avoided;
[image: media_image6.png, greyscale, 242 × 274]
; [0088]: the system identifies multiple individual visual objects including a first object and a second object); and
in response to the influential object carrying a target label, determining that the second condition is met, and adjusting the output parameter from the first value to the second value (Kunkel; Fig. 7D; [0079-0080]: the content analyzer determines that by moving the content capturing device to the left, the occlusion can be avoided; visual object 706 is one of the target objects and carries a target label as illustrated in Fig. 7D; Fig. 7E; [0081]: a digital image is taken by the content capturing device of the visual objects at a second location).
The same motivation as in claim 1 applies here.
Regarding claim 8 (Currently Amended), Fein in view of Kunkel discloses the method according to claim 5, wherein, in response to the first condition and the second condition being met, adjusting the output parameter from the first value to the second value (same as rejected in claim 5), includes:
in response to the influential object of the target object being detected, determining that the first condition is met (Kunkel; Fig. 7B; [0077]: a content analyzer of the system determines that only visual objects 702 and 706 are visible, and that the visual object 704 is occluded; Fig. 7D; [0079]: visual object 706 includes left side, right side, and head; the left side of visual object 706 overlaps with visual object 704, i.e., closest; the left side of visual object 706 is selected as influential; the content analyzer determines that by moving the content capturing device to the left, the occlusion can be avoided;
[image: media_image6.png, greyscale, 242 × 274]
; [0088]: the system identifies multiple individual visual objects including a first object and a second object); and
in response to a number of times that the influential object is detected meeting a consecutive preset number, determining that the second condition is met, and adjusting the output parameter from the first value to the second value (Kunkel; Fig. 7C; [0078]: the content analyzer can determine that by moving the content capturing device higher, the occlusion can be avoided; Fig. 7D; [0079]: a second response of the system provided as a result of the violation; the content analyzer determines that by moving the content capturing device to the left, the occlusion can be avoided; [0080]: the controller chooses whether to apply the first response or second response based on various factors, including, for example, a pre-set preference on whether to increase altitude when possible, a limit in space; Fig. 7E; [0081]: a digital image is taken by the content capturing device of the visual objects at a second location).
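Claim 8's consecutive-detection condition is, in effect, a debounce counter against transient occlusions. Purely as an illustrative sketch (not from the cited references; the names are hypothetical):

```python
class ConsecutiveDetectionGate:
    """Second-condition check sketched from claim 8: adjust only after
    the influential object has been detected a preset number of
    consecutive times."""

    def __init__(self, preset: int):
        self.preset = preset
        self.count = 0

    def update(self, influential_detected: bool) -> bool:
        """Feed one detection result per frame; returns True when the
        second condition is met and the output parameter should be
        adjusted from the first value to the second value."""
        # A miss resets the streak; a hit extends it.
        self.count = self.count + 1 if influential_detected else 0
        return self.count >= self.preset
```

A single missed detection resets the streak, so only a sustained occlusion triggers the adjustment.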
Regarding claim 9 (Original), Fein in view of Kunkel discloses the method according to claim 8, wherein, in response to the number of times that the influential object is detected meeting the consecutive preset number, determining that the second condition is met and adjusting the output parameter from the first value to the second value (same as rejected in claim 8) include:
in response to the number of times that the influential object is detected meeting the consecutive preset number and target distances between positions of the preset number of reference points and the position of the influential object satisfying a similarity condition, determining that the second condition is met and adjusting the output parameter from the first value to the second value (Kunkel; Fig. 7C; [0078]: the content analyzer can determine that by moving the content capturing device higher, the occlusion can be avoided; Fig. 7D; [0079]: a second response of the system provided as a result of the violation; the content analyzer determines that by moving the content capturing device to the left, the occlusion can be avoided; [0080]: the controller chooses whether to apply the first response or second response based on various factors, including, for example, a pre-set preference on whether to increase altitude when possible, a limit in space; Fig. 7E; Fig. 7F; [0081]: a digital image is taken by the content capturing device of the visual objects at a second location; a proportion between the visual objects, as shown in boxes 718 and 710, is relatively uniform, i.e., similarity; FIG. 7F is a digital image taken by the content capturing device of the visual objects at a third location that is closer to the visual objects 702, 704 and 706 than the second location is.).
The same motivation as in claim 1 applies here.
Regarding claim 10 (Currently Amended), Fein discloses an electronic device (Fig. 3A; [0060]: a specifically-designed AR system in the form of video goggles; [0061]: capture actual scenes of real-world environments in order to generate augmented views of those actual scenes; [0065]: display and overlay computer generated data and images onto portions of views of actual scenes of the real-world environment), comprising a processor, a memory, and a communication bus (Fig. 7A; [0113]: one or more processors 116, a memory 114, communication bus, and a network interface 112 as illustrated in Fig. 7A), wherein:
the communication bus is configured to realize a communication connection between the processor and the memory (Fig. 7A; Fig. 7B; [0113]: one or more processors 116, a memory 114, communication bus, and a network interface 112 as illustrated in Fig. 7A);
the memory is configured to store an information processing program ([0113]: the memory 114 stores one or more applications 160; Fig. 7A; Fig. 7B; [0115]: one or more processors 116 executes one or more computer readable instructions 152 stored in memory 114; [0116]); and
the processor is configured to execute the information processing program stored in the memory, to (Fig. 7A; Fig. 7B; [0115]: one or more processors 116 executes one or more computer readable instructions 152 stored in memory 114):
The remaining claim limitations are similar to those recited in claim 1. Therefore, the same rationale used to reject claim 1 is also used to reject claim 10.
Regarding claim 11 (Currently Amended), Fein in view of Kunkel discloses the electronic device according to claim 10, wherein:
The remaining claim limitations are similar to those recited in claim 2. Therefore, the same rationale used to reject claim 2 is also used to reject claim 11.
Regarding claim 12 (Currently Amended), Fein in view of Kunkel discloses the electronic device according to claim 11, wherein the processor is further configured to (same as rejected in claim 10):
The remaining claim limitations are similar to those recited in claim 3. Therefore, the same rationale used to reject claim 3 is also used to reject claim 12.
Regarding claim 13 (Currently Amended), Fein in view of Kunkel discloses the electronic device according to claim 11, wherein the processor is further configured to (same as rejected in claim 10):
The remaining claim limitations are similar to those recited in claim 4. Therefore, the same rationale used to reject claim 4 is also used to reject claim 13.
Regarding claim 14 (Original), Fein in view of Kunkel discloses the electronic device according to claim 10, wherein:
The remaining claim limitations are similar to those recited in claim 5. Therefore, the same rationale used to reject claim 5 is also used to reject claim 14.
Regarding claim 16 (Currently Amended), Fein in view of Kunkel discloses the electronic device according to claim 14, wherein the processor is further configured to (same as rejected in claim 10):
The remaining claim limitations are similar to those recited in claim 7. Therefore, the same rationale used to reject claim 7 is also used to reject claim 16.
Regarding claim 17 (Currently Amended), Fein in view of Kunkel discloses the electronic device according to claim 14, wherein the processor is further configured to (same as rejected in claim 10):
The remaining claim limitations are similar to those recited in claim 8. Therefore, the same rationale used to reject claim 8 is also used to reject claim 17.
Regarding claim 18 (Original), Fein in view of Kunkel discloses the electronic device according to claim 17, wherein the processor is further configured to (same as rejected in claim 10):
The remaining claim limitations are similar to those recited in claim 9. Therefore, the same rationale used to reject claim 9 is also used to reject claim 18.
Regarding claim 19 (Currently Amended), Fein discloses a non-transitory computer readable storage medium, configured to store an information processing program (Fig. 3A; [0060]: a specifically-designed AR system in the form of video goggles; [0061]: capture actual scenes of real-world environments in order to generate augmented views of those actual scenes; [0065]: display and overlay computer generated data and images onto portions of views of actual scenes of the real-world environment; Fig. 7A; [0113]: the memory 114 stores one or more applications; [0115]: store one or more computer readable instructions 152 in memory 114; [0116]), wherein:
the information processing program is configured to be executed by a device where the non-transitory computer readable storage medium is located, to control the device to (Fig. 7A; [0113]: the memory 114 stores one or more applications; [0115]: store one or more computer readable instructions 152 in memory 114; [0116]):
The remaining claim limitations are similar to those recited in claim 1. Therefore, the same rationale used to reject claim 1 is also used to reject claim 19.
Regarding claim 20 (Currently Amended), Fein in view of Kunkel discloses the non-transitory computer readable storage medium according to claim 19, wherein:
The remaining claim limitations are similar to those recited in claim 2. Therefore, the same rationale used to reject claim 2 is also used to reject claim 20.
Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Fein (US 20140098137 A1) in view of Kunkel (US 20180234612 A1), and further in view of Stebbins (US 20210038985 A1).
Regarding claim 6 (Currently Amended), Fein in view of Kunkel discloses the method according to claim 5, wherein, in response to the first condition and the second condition being met, adjusting the output parameter from the first value to the second value (same as rejected in claim 5), includes:
in response to the influential object of the target object being detected, determining that the first condition is met and using a moment when the influential object is detected as a first moment (Kunkel; Fig. 7B; [0077]: a content analyzer of the system determines that only visual objects 702 and 706 are visible, and that the visual object 704 is occluded; [0089]: the system determines (904), from a second digital image captured by the content capturing device, that the first object is obstructed by the second object);
Fein in view of Kunkel fails to explicitly disclose:
in response to the influential object being undetected within a first time-interval from the first moment, determining that the second condition is met, and adjusting the output parameter from the first value to the second value at a second moment after the first time-interval elapses from the first moment; and
in the first time-interval, in response to the influential object being detected at a third moment, using the third moment as the first moment.
In the same field of endeavor, Stebbins teaches:
in response to the influential object being undetected within a first time-interval from the first moment, determining that the second condition is met, and adjusting the output parameter from the first value to the second value at a second moment after the first time-interval elapses from the first moment ([0035]: at any given instance in time, the position of the avatar 102 may be identified at points A, A1, A2, A3, A4; [0038]: determining whether to change the camera view can be based on time durations; Fig. 5; [0078]: the game application 412 determines whether the graphical object occludes the avatar 102 in the current camera view, and if so, by how much; [0079]: a time threshold, such as whether the speed of traversal of the avatar will cause an occlusion to occur for only a short duration of time or for a relatively longer duration of time; [0081]: changes the camera view); and
in the first time-interval, in response to the influential object being detected at a third moment, using the third moment as the first moment (Fig. 2A; [0031]: the windows 202 and 204 may be transparent or semi-transparent, and the window frame 206 may have a narrower profile relative to the profile of the avatar 102;
[image: media_image7.png, greyscale, 282 × 720]
; [0032]: the camera that generates the camera view 250 has now moved back to its original position; Fig. 5; [0083]: the camera view may be changed again so as to move backward away from the avatar).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fein in view of Kunkel to include in response to the influential object being undetected within a first time-interval from the first moment, determining that the second condition is met, and adjusting the output parameter from the first value to the second value at a second moment after the first time-interval elapses from the first moment; and in the first time-interval, in response to the influential object being detected at a third moment, using the third moment as the first moment, as taught by Stebbins. The motivation for doing so would have been to determine, based on time durations, whether to change the camera view; to avoid rendering the occluding graphical object during the transition; and to determine whether the graphical object occludes the avatar 102 in the current camera view, and if so, by how much, as taught by Stebbins in paragraphs [0038], [0042], and [0078].
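For illustration only (hypothetical names; not code from Stebbins or the application), the claim 6 timing logic, in which a detection inside the interval restarts the window as a new first moment and the adjustment fires once the object stays undetected for a full interval, can be sketched as:

```python
class TimeIntervalGate:
    """Claim 6's timing logic, sketched: the 'first moment' is the most
    recent time the influential object was detected; if it then stays
    undetected for a full first time-interval, the second condition is
    met and the adjustment fires at the 'second moment'."""

    def __init__(self, interval: float):
        self.interval = interval
        self.first_moment = None

    def update(self, now: float, influential_detected: bool) -> bool:
        if influential_detected:
            # A detection at a 'third moment' inside the interval
            # becomes the new first moment, restarting the window.
            self.first_moment = now
            return False
        if self.first_moment is None:
            return False
        # Undetected for the whole interval: second condition met.
        return now - self.first_moment >= self.interval
```

Each re-detection pushes the window forward, so the adjustment only happens after a continuous interval without the influential object.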
Regarding claim 15 (Currently Amended), Fein in view of Kunkel discloses the electronic device according to claim 14, wherein the processor is further configured to (same as rejected in claim 10):
The remaining claim limitations are similar to those recited in claim 6. Therefore, the same rationale used to reject claim 6 is also used to reject claim 15.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hai Tao Sun, whose telephone number is (571) 272-5630. The examiner can normally be reached at 9:00 AM-6:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Hajnik, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HAI TAO SUN/Primary Examiner, Art Unit 2616