DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to because in Fig. 9C, “c4” should read “c5” and “c6” should read “c4” to be consistent with Figs. 9A and 9B. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Objections

Claims 1-20 are objected to because of the following informalities: In claim 1 line 13, “the need” should read “a need”. In claim 9 line 14, “the desire” should read “a desire”. In claim 16 line 19, “the desire” should read “a desire”. Claims 2-8 are objected to because of their dependence on claim 1. Claims 10-15 are objected to because of their dependence on claim 9. Claims 17-20 are objected to because of their dependence on claim 16. Appropriate correction is required.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-6, 9-11, 13-14, 16-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Shirai et al. (US 20230162433 A1) in view of Torama et al. (US 20240281059 A1), hereinafter Shirai and Torama, respectively.

Regarding claim 1, Shirai teaches a method of taking photo in a virtual reality environment (Paragraph 0066, 0096, 0101 – “The server controller 13 cooperates with the terminal device 20 to display an image(s) of a virtual space on the display portion 23.
The image of the virtual space is updated according to the progress of the virtual reality and the user operation…a predetermined event occurs when at least one of the following meets a first predetermined condition: (i) the respective positions of the first user avatar M1 and the second user avatar M1 in the virtual space, and (ii) a first relative positional relationship between the first user avatar M1 and the second user avatar M1 in the virtual space…the predetermined event includes an image acquisition event such as commemorative picture taking. In this case, the image acquisition event may be an event in which an image including the first user avatar M1 and the second user avatar M1 with a location as a background is acquired”; Note: a method of taking a picture in a virtual reality environment is provided), the method comprising: defining a group that includes multiple entities (Paragraph 0109-0110 – “when unknown (non-friend) avatars exist in the virtual space, the unknown avatars will not appear in the image…when a commemorative picture is taken for an image acquisition event, other user avatars M1 located at the same location in the past may appear according to a selection (request) by the user that causes the image acquisition event to occur. Thereby, a group picture-like image containing many user avatars at the image acquisition event can be obtained…the user may select an avatar with whom to take the image (e.g., from among multiple nearby avatars)”; Note: a group of avatars is defined based on user selection and whether or not the user knows the other avatars); detecting multiple directions that the multiple entities are facing based upon orientations of the multiple entities (Paragraph 0181, 0183-0184, 0282 – “The relative positional relationship between the first user avatar M1 and the second user avatar(s) M1 in the virtual space is a concept including relative distance, relative orientation, and the like.
In this embodiment, the first relative relationship information includes the direction guide image(s) G6161 and the distance guide image(s) G6162 as described above with reference to FIG. 6…the direction α related to the direction guide image G6161 may be calculated based on the following calculation formula, for example. α = ϕ + θ [Formula 1] Here, (Latp, Lngp) is position information (latitude, longitude) of the first user avatar M1, and (Latt, Lngt) is position information (latitude, longitude) of the second user avatar M1 in the coordinate system associated with the real space image…the orientation ϕ of the first user avatar M1 may be defined by rotation angles around the x1 axis, the y1 axis, and the z1 axis as the orientation of the face part…the history data of orientation information of each user avatar M1 may also be stored in association with the movement history data of each user avatar M1”; Note: the directions and orientations of the face of each user avatar are detected); identifying multiple extension lines that start from the multiple entities in the group and extend in the multiple directions (Paragraph 0282, 0289 – “the history data of orientation information of each user avatar M1 may also be stored in association with the movement history data of each user avatar M1. In this case, it is possible to obtain information such as what line-of-sight direction each user avatar M1 had at each coordinate…The line-of-sight direction of each user avatar M1 may be evaluated based on the history data of the orientation information described above”; Note: the line-of-sight, which is equivalent to the extension line, of each user is identified); defining a camera line based on an orientation (Fig. 16, Paragraph 0171 – “the first user avatar M1 is positioned in front of the virtual camera 60.
At this time, the line-of-sight direction of the virtual camera 60 corresponds to the orientation toward the first user avatar M1”; Note: a line-of-sight of the virtual camera is a camera line; see screenshot of Fig. 16 below, which shows the camera location and camera line); determining a camera location along the camera line (Fig. 16, Paragraph 0169, 0171 – “The position of the virtual camera 60 corresponds to the first viewpoint related to the terminal image for the first user, and the line-of-sight direction of the virtual camera 60 (the direction of the arrow R13) shows the line-of-sight direction (direction of viewing the real space image) from the first viewpoint when generating the terminal image for the first user…the line-of-sight direction of the virtual camera 60 corresponds to the orientation toward the first user avatar M1. In other words, the first user avatar M1 moves within the field of view of virtual camera 60”; Note: the camera location is determined along a line-of-sight of a viewpoint, which is a camera line; see screenshot of Fig. 16 below, which shows the camera location and camera line); taking an image of virtual view of the multiple entities by a virtual camera (Paragraph 0081, 0108 – “The operation button B602 is a shutter button and is operated when taking a picture (taking a picture in the virtual space) such as in commemorative picture taking described below…In the example shown in FIGS. 7 and 8, the picture taken in the commemorative picture taking (for example, a “two shot” picture capturing two people) shows the first user avatar M1 and the second user avatar M1”; Note: an image is taken of two user avatars in a virtual space.
The user avatars are the multiple entities, and it is implied that the image is taken by a virtual camera since it is taken in a virtual space); and meeting the need of taking photo by the multiple entities by providing the image as a photo to the multiple entities in the group (Paragraph 0096, 0101, 0104, 0208 – “a predetermined event occurs when at least one of the following meets a first predetermined condition: (i) the respective positions of the first user avatar M1 and the second user avatar M1 in the virtual space, and (ii) a first relative positional relationship between the first user avatar M1 and the second user avatar M1 in the virtual space…the predetermined event includes an image acquisition event such as commemorative picture taking. In this case, the image acquisition event may be an event in which an image including the first user avatar M1 and the second user avatar M1 with a location as a background is acquired…when the first predetermined condition is satisfied, the operation button B602 (see FIG. 6), which is the shutter button, is made active (operable) as preparation processing for the image acquisition event. When the first user operates the operation button B602, an image acquisition event occurs. Alternatively, an image may be acquired when a user or an avatar performs a specific action other than pressing the shutter button…on a picture reproduction page, the user can view the image(s) pertaining to the event image data based on the accessed image generation condition”; Note: the image acquisition event provides the captured image to the users. It is implied that both users receive the image since they both satisfied the predetermined condition and participated in the event). Screenshot of Fig.
16 (taken from Shirai)

Shirai does not teach identifying one or more intersections associated with the multiple directions; identifying one or more connection lines that connect a center of the group to the one or more intersections; nor the “connection lines” in the limitation: “defining a camera line based on the one or more connection lines”. However, Torama teaches identifying one or more intersections associated with the multiple directions (Paragraph 0035 – “the eye tracking device 1 according to the present disclosed technology may calculate an intersection between a line of sight of a right eye and a line of sight of a left eye”; Note: an intersection made by lines-of-sight is identified. The lines-of-sight correspond to directions); and identifying one or more connection lines that connect a center of the group to the one or more intersections (Paragraph 0035 – “the eye tracking device 1 according to the present disclosed technology may calculate an intersection between a line of sight of a right eye and a line of sight of a left eye, and output, as a line-of-sight direction vector, a vector starting from the midpoint between the eyeball center position of the right eye and the eyeball center position of the left eye and ending at the intersection”; Note: the vector starting from the midpoint to the intersection is equivalent to the connection line). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shirai to incorporate the teachings of Torama to identify intersections associated with the directions for the benefit of helping find an overall line-of-sight for multiple lines-of-sight, which would assist in situations, like in Shirai, when there are multiple avatars and lines-of-sight while capturing a picture.
Specifically, in Shirai, “the line-of-sight direction of the virtual camera 60 corresponds to the orientation toward the first user avatar M1” (Paragraph 0171), but if there are multiple avatars, the line-of-sight of the virtual camera would need to correspond to the orientation toward the multiple avatars. Finding a converging point of the orientation of each of the avatars would help determine an optimal orientation of the virtual camera toward the multiple avatars for taking a picture that generally faces where the avatars are facing. It also would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shirai to incorporate the teachings of Torama to identify a connection line connecting a center of a group to the intersection because connecting the intersection with a group center forms a line that can represent the line-of-sight of the whole group, which is a more accurate representation than choosing the line-of-sight of a single entity to represent the whole group. Then, the overall line-of-sight can be used as an orientation of the virtual camera toward multiple avatars when taking a picture of the avatars. While Torama is directed to the lines-of-sight of individual eyes, the process can still be applied to other entities, such as the avatars in Shirai. Additionally, a person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the line-of-sight direction vector (connection line) of Torama could have been substituted for the orientation of Shirai because both the orientation and the line-of-sight direction vector serve the purpose of representing the direction an object is facing. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of defining a camera line based on the direction an object is facing.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the line-of-sight direction vector of Torama for the orientation of Shirai according to known methods to yield the predictable result of defining a camera line based on the direction an object is facing. By substituting the line-of-sight direction vector of Torama for Shirai’s orientation, Shirai modified by Torama teaches defining a camera line based on the one or more connection lines. The line-of-sight direction vector of Torama is equivalent to a connection line.

Regarding claim 2, Shirai in view of Torama teaches the method of claim 1. Shirai further teaches wherein defining the group comprises identifying the multiple entities from a cluster of entities based on a size of the group and distances among the entities in the cluster (Paragraph 0096-0097, 0101, 0110 – “a predetermined event occurs when at least one of the following meets a first predetermined condition: (i) the respective positions of the first user avatar M1 and the second user avatar M1 in the virtual space, and (ii) a first relative positional relationship between the first user avatar M1 and the second user avatar M1 in the virtual space…the first predetermined condition is satisfied when the first user avatar M1 and the second user avatar M1 are in close proximity to each other…the predetermined event includes an image acquisition event such as commemorative picture taking…when a commemorative picture is taken for an image acquisition event, other user avatars M1 located at the same location in the past may appear according to a selection (request) by the user that causes the image acquisition event to occur. Thereby, a group picture-like image containing many user avatars at the image acquisition event can be obtained…the user may select an avatar with whom to take the image (e.g., from among multiple nearby avatars).
Also, if no other avatars are nearby, an image may be taken of the first user avatar M1 alone”; Note: avatars are identified for taking a photo based on distance between the avatars and the number of nearby avatars (size of group). All the avatars in the area are equivalent to a cluster).

Regarding claim 3, Shirai in view of Torama teaches the method of claim 1. Shirai does not teach wherein identifying the one or more intersections comprises identifying the one or more intersections made by the multiple extension lines. However, Torama teaches identifying the one or more intersections made by the multiple extension lines (Paragraph 0035 – “the eye tracking device 1 according to the present disclosed technology may calculate an intersection between a line of sight of a right eye and a line of sight of a left eye”; Note: an intersection made by lines-of-sight is identified. The lines-of-sight are equivalent to the extension lines). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shirai to incorporate the teachings of Torama to identify intersections made by the extension lines for the benefit of helping find an overall line-of-sight for multiple lines-of-sight, which would assist in situations, like in Shirai, when there are multiple avatars and lines-of-sight while capturing a picture. Specifically, in Shirai, “the line-of-sight direction of the virtual camera 60 corresponds to the orientation toward the first user avatar M1” (Paragraph 0171), but if there are multiple avatars, the line-of-sight of the virtual camera would need to correspond to the orientation toward the multiple avatars. Finding a converging point of the orientation line of each of the avatars would help determine an optimal orientation of the virtual camera toward the multiple avatars for taking a picture that generally faces where the avatars are facing.
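The geometry relied on for claims 1 and 3 (intersecting the entities' extension lines and connecting the group center to the intersection) can be sketched in a few lines. This is only an illustrative 2D sketch; the function names, the planar simplification, and the choice of centroid as "group center" are my own assumptions, not taken from Shirai or Torama:

```python
import math

def ray_intersection(p1, d1, p2, d2):
    # Solve p1 + t1*d1 == p2 + t2*d2 for t1 via Cramer's rule (2D).
    # Returns None when the two extension lines are (near-)parallel.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-9:
        return None
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two avatars on a plane, facing converging directions (extension lines).
p_a, d_a = (0.0, 0.0), (1.0, 1.0)
p_b, d_b = (4.0, 0.0), (-1.0, 1.0)

hit = ray_intersection(p_a, d_a, p_b, d_b)               # intersection point
center = ((p_a[0] + p_b[0]) / 2, (p_a[1] + p_b[1]) / 2)  # group center (centroid)
# Connection line: from the group center to the intersection.
cx, cy = hit[0] - center[0], hit[1] - center[1]
norm = math.hypot(cx, cy)
camera_line = (cx / norm, cy / norm)                     # camera line direction
```

With the two sample avatars above, the extension lines meet at (2, 2), the group center is (2, 0), and the resulting camera line points straight along the converged gaze, consistent with the rationale that the connection line represents the line-of-sight of the whole group.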
Regarding claim 5, Shirai in view of Torama teaches the method of claim 1. Shirai further teaches if there is only one orientation, defining the camera line as along the orientation (Fig. 16, Paragraph 0171 – “the first user avatar M1 is positioned in front of the virtual camera 60. At this time, the line-of-sight direction of the virtual camera 60 corresponds to the orientation toward the first user avatar M1”; Note: a line-of-sight of the virtual camera is a camera line; see screenshot of Fig. 16 above, which shows the camera location and camera line. The camera line is defined along the orientation toward the avatar). Shirai does not teach the connection line in the limitation: “if there is only one connection line, defining the camera line as along the connection line; or if there are two connection lines, defining the camera line as along a line that bisects an ordinary angle formed by the two connection lines; or if there are more than two connection lines, defining the camera line as along a line that bisects an ordinary angle formed by two connection lines that pass through first two intersections made by majority of the multiple extension lines”. However, Torama teaches that there is only one connection line (Paragraph 0035 – “the eye tracking device 1 according to the present disclosed technology may calculate an intersection between a line of sight of a right eye and a line of sight of a left eye, and output, as a line-of-sight direction vector, a vector starting from the midpoint between the eyeball center position of the right eye and the eyeball center position of the left eye and ending at the intersection”; Note: the vector starting from the midpoint to the intersection is equivalent to the connection line. There is only one connection line).
A person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the line-of-sight direction vector (connection line) of Torama could have been substituted for the orientation of Shirai because both the orientation and the line-of-sight direction vector serve the purpose of representing the direction an object is facing. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of defining a camera line based on the direction an object is facing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the line-of-sight direction vector of Torama for the orientation of Shirai according to known methods to yield the predictable result of defining a camera line based on the direction an object is facing. By substituting the line-of-sight direction vector of Torama for Shirai’s orientation, Shirai modified by Torama teaches defining a camera line as along the connection line. The line-of-sight direction vector of Torama is equivalent to a connection line.

Regarding claim 6, Shirai in view of Torama teaches the method of claim 1. Shirai further teaches if there is only one orientation, defining the camera line as along the orientation (Fig. 16, Paragraph 0171 – “the first user avatar M1 is positioned in front of the virtual camera 60. At this time, the line-of-sight direction of the virtual camera 60 corresponds to the orientation toward the first user avatar M1”; Note: a line-of-sight of the virtual camera is a camera line; see screenshot of Fig. 16 above, which shows the camera location and camera line. The camera line is defined along the orientation toward the avatar).
Shirai does not teach the connection line in the limitation: “if there is only one connection line, defining the camera line as along the connection line; or if there are two connection lines, defining the camera line as along a line that bisects an ordinary angle formed by the two connection lines; or if there are more than two connection lines and there is a reflex angle formed by two adjacent connection lines, defining the camera line as along a line that bisects an explementary angle of the reflex angle formed by the two adjacent connection lines”. However, Torama teaches that there is only one connection line (Paragraph 0035 – “the eye tracking device 1 according to the present disclosed technology may calculate an intersection between a line of sight of a right eye and a line of sight of a left eye, and output, as a line-of-sight direction vector, a vector starting from the midpoint between the eyeball center position of the right eye and the eyeball center position of the left eye and ending at the intersection”; Note: the vector starting from the midpoint to the intersection is equivalent to the connection line. There is only one connection line). A person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the line-of-sight direction vector (connection line) of Torama could have been substituted for the orientation of Shirai because both the orientation and the line-of-sight direction vector serve the purpose of representing the direction an object is facing. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of defining a camera line based on the direction an object is facing.
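The claim 5/6 alternatives quoted above (camera line along a single connection line, or along the bisector of the ordinary angle between two connection lines) reduce to elementary vector arithmetic. The sketch below is only an illustration under assumptions of my own: connection lines are modeled as 2D vectors from the group center, and the bisector of the ordinary (non-reflex) angle is obtained as the normalized sum of the two unit vectors, which fails only when the two lines are exactly opposite:

```python
import math

def normalize(v):
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

def camera_line_direction(connection_lines):
    # One connection line: the camera line lies along it.
    if len(connection_lines) == 1:
        return normalize(connection_lines[0])
    # Two connection lines: the sum of their unit vectors points along the
    # bisector of the ordinary (non-reflex) angle between them, provided
    # the two lines are not exactly opposite.
    if len(connection_lines) == 2:
        u = normalize(connection_lines[0])
        w = normalize(connection_lines[1])
        return normalize((u[0] + w[0], u[1] + w[1]))
    raise NotImplementedError("3+ lines: select two per the claimed rule")

# One line: camera line follows it directly.
single = camera_line_direction([(3.0, 0.0)])
# Two lines 90 degrees apart: bisector points at 45 degrees.
pair = camera_line_direction([(1.0, 0.0), (0.0, 1.0)])
```

The more-than-two-lines branches of claims 5 and 6 (selecting two connection lines, or bisecting the explementary angle of a reflex angle) are left unimplemented here because the claim language supplies the selection rule, not the prior art.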
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the line-of-sight direction vector of Torama for the orientation of Shirai according to known methods to yield the predictable result of defining a camera line based on the direction an object is facing. By substituting the line-of-sight direction vector of Torama for Shirai’s orientation, Shirai modified by Torama teaches defining a camera line as along the connection line. The line-of-sight direction vector of Torama is equivalent to a connection line.

Regarding claim 9, Shirai teaches a non-transitory storage medium thereupon stored a set of computer-readable instructions that, when being executed by a computer, cause the computer to perform (Claim 15 – “A non-transitory computer-readable medium storing a program causing a computer to execute…”): defining a group that includes multiple entities (Paragraph 0109-0110 – “when unknown (non-friend) avatars exist in the virtual space, the unknown avatars will not appear in the image…when a commemorative picture is taken for an image acquisition event, other user avatars M1 located at the same location in the past may appear according to a selection (request) by the user that causes the image acquisition event to occur.
Thereby, a group picture-like image containing many user avatars at the image acquisition event can be obtained…the user may select an avatar with whom to take the image (e.g., from among multiple nearby avatars)”; Note: a group of avatars is defined based on user selection and whether or not the user knows the other avatars); detecting multiple directions that the multiple entities are facing based upon orientations of the multiple entities (Paragraph 0181, 0183-0184, 0282 – “The relative positional relationship between the first user avatar M1 and the second user avatar(s) M1 in the virtual space is a concept including relative distance, relative orientation, and the like. In this embodiment, the first relative relationship information includes the direction guide image(s) G6161 and the distance guide image(s) G6162 as described above with reference to FIG. 6…the direction α related to the direction guide image G6161 may be calculated based on the following calculation formula, for example.
α = ϕ + θ [Formula 1] Here, (Latp, Lngp) is position information (latitude, longitude) of the first user avatar M1, and (Latt, Lngt) is position information (latitude, longitude) of the second user avatar M1 in the coordinate system associated with the real space image…the orientation ϕ of the first user avatar M1 may be defined by rotation angles around the x1 axis, the y1 axis, and the z1 axis as the orientation of the face part…the history data of orientation information of each user avatar M1 may also be stored in association with the movement history data of each user avatar M1”; Note: the directions and orientations of the face of each user avatar are detected); identifying multiple extension lines that start from the multiple entities in the group and extend in the multiple directions (Paragraph 0282, 0289 – “the history data of orientation information of each user avatar M1 may also be stored in association with the movement history data of each user avatar M1. In this case, it is possible to obtain information such as what line-of-sight direction each user avatar M1 had at each coordinate…The line-of-sight direction of each user avatar M1 may be evaluated based on the history data of the orientation information described above”; Note: the line-of-sight, which is equivalent to the extension line, of each user is identified); defining a camera line based on an orientation (Fig. 16, Paragraph 0171 – “the first user avatar M1 is positioned in front of the virtual camera 60. At this time, the line-of-sight direction of the virtual camera 60 corresponds to the orientation toward the first user avatar M1”; Note: a line-of-sight of the virtual camera is a camera line; see screenshot of Fig. 16 above, which shows the camera location and camera line); determining a camera location along the camera line (Fig.
16, Paragraph 0169, 0171 – “The position of the virtual camera 60 corresponds to the first viewpoint related to the terminal image for the first user, and the line-of-sight direction of the virtual camera 60 (the direction of the arrow R13) shows the line-of-sight direction (direction of viewing the real space image) from the first viewpoint when generating the terminal image for the first user…the line-of-sight direction of the virtual camera 60 corresponds to the orientation toward the first user avatar M1. In other words, the first user avatar M1 moves within the field of view of virtual camera 60”; Note: the camera location is determined along a line-of-sight of a viewpoint, which is a camera line; see screenshot of Fig. 16 above, which shows the camera location and camera line); taking an image of virtual view of the multiple entities by a virtual camera (Paragraph 0081, 0108 – “The operation button B602 is a shutter button and is operated when taking a picture (taking a picture in the virtual space) such as in commemorative picture taking described below…In the example shown in FIGS. 7 and 8, the picture taken in the commemorative picture taking (for example, a “two shot” picture capturing two people) shows the first user avatar M1 and the second user avatar M1”; Note: an image is taken of two user avatars in a virtual space.
The user avatars are the multiple entities, and it is implied that the image is taken by a virtual camera since it is taken in a virtual space); and meeting the desire of taking photo by the multiple entities by providing the image as a photo to the multiple entities in the group (Paragraph 0096, 0101, 0104, 0208 – “a predetermined event occurs when at least one of the following meets a first predetermined condition: (i) the respective positions of the first user avatar M1 and the second user avatar M1 in the virtual space, and (ii) a first relative positional relationship between the first user avatar M1 and the second user avatar M1 in the virtual space…the predetermined event includes an image acquisition event such as commemorative picture taking. In this case, the image acquisition event may be an event in which an image including the first user avatar M1 and the second user avatar M1 with a location as a background is acquired…when the first predetermined condition is satisfied, the operation button B602 (see FIG. 6), which is the shutter button, is made active (operable) as preparation processing for the image acquisition event. When the first user operates the operation button B602, an image acquisition event occurs. Alternatively, an image may be acquired when a user or an avatar performs a specific action other than pressing the shutter button…on a picture reproduction page, the user can view the image(s) pertaining to the event image data based on the accessed image generation condition”; Note: the image acquisition event provides the captured image to the users. It is implied that both users receive the image since they both satisfied the predetermined condition and participated in the event).
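The direction-guide calculation quoted from Shirai (Formula 1, α = ϕ + θ) can be made concrete with a short sketch. Note the hedging: the quoted excerpt gives ϕ as the first avatar's orientation but does not spell out how θ is computed from (Latp, Lngp) and (Latt, Lngt), so the bearing computation below is only an illustrative guess using a planar atan2 over the coordinate differences:

```python
import math

def direction_guide_angle(phi_deg, lat_p, lng_p, lat_t, lng_t):
    # alpha = phi + theta (Formula 1, as quoted from Shirai).
    # ASSUMPTION: theta is taken as the planar bearing from the first
    # avatar (Latp, Lngp) to the second avatar (Latt, Lngt); the excerpt
    # does not define theta's computation, so this is illustrative only.
    theta_deg = math.degrees(math.atan2(lng_t - lng_p, lat_t - lat_p))
    return (phi_deg + theta_deg) % 360.0

# First avatar oriented at phi = 30 degrees; second avatar positioned so
# that the bearing theta toward it is 90 degrees.
alpha = direction_guide_angle(30.0, 0.0, 0.0, 0.0, 1.0)
```

With these sample values the guide angle comes out to ϕ + θ = 30 + 90 = 120 degrees, matching the additive form of Formula 1.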
Shirai does not teach identifying one or more intersections associated with the multiple directions; identifying one or more connection lines that connect a center of the group to the one or more intersections; nor the “connection lines” in the limitation: “defining a camera line based on the one or more connection lines”. However, Torama teaches identifying one or more intersections associated with the multiple directions (Paragraph 0035 – “the eye tracking device 1 according to the present disclosed technology may calculate an intersection between a line of sight of a right eye and a line of sight of a left eye”; Note: an intersection made by lines-of-sight is identified. The lines-of-sight correspond to directions); and identifying one or more connection lines that connect a center of the group to the one or more intersections (Paragraph 0035 – “the eye tracking device 1 according to the present disclosed technology may calculate an intersection between a line of sight of a right eye and a line of sight of a left eye, and output, as a line-of-sight direction vector, a vector starting from the midpoint between the eyeball center position of the right eye and the eyeball center position of the left eye and ending at the intersection”; Note: the vector starting from the midpoint to the intersection is equivalent to the connection line). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shirai to incorporate the teachings of Torama to identify intersections associated with the directions for the benefit of helping find an overall line-of-sight for multiple lines-of-sight, which would assist in situations, like in Shirai, when there are multiple avatars and lines-of-sight while capturing a picture.
Specifically, in Shirai, “ the line-of-sight direction of the virtual camera 60 corresponds to the orientation toward the first user avatar M1 ” (Paragraph 0171), but if there are multiple avatars, the line-of-sight of the virtual camera would need to correspond to the orientation toward the multiple avatars. Finding a converging point of the orientation of each of the avatars would help determine an optimal orientation of the virtual camera toward the multiple avatars for taking a picture that generally faces where the avatars are facing. It also would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shirai to incorporate the teachings of Torama to identify a connection line connecting a center of a group to the intersection because connecting the intersection with a group center forms a line that can represent the line-of-sight of the whole group, which is a more accurate representation than choosing the line-of-sight of a single entity to represent the whole group. Then, the overall line-of-sight can be used as an orientation of the virtual camera toward multiple avatars when taking a picture of the avatars. While Torama is directed to the lines-of-sight of individual eyes, the process can still be applied to other entities, such as the avatars in Shirai. Additionally, a person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the line-of-sight direction vector (connection line) of Torama could have been substituted for the orientation of Shirai because both the orientation and line-of-sight direction vector serve the purpose of representing the direction an object is facing . Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of defining a camera line based on the direction an object is facing . 
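Though neither reference provides pseudocode, the construction discussed above — intersecting the entities’ sight lines and connecting the group center to the resulting intersection — can be sketched in two dimensions as follows (a minimal illustration for the record; all function names and coordinates are hypothetical and appear in neither Shirai nor Torama):

```python
# Minimal 2D sketch (hypothetical, for illustration only): intersect two
# sight lines, then form the "connection line" from the group center to
# the intersection. The camera line would run along this vector.

def intersect(p1, d1, p2, d2):
    """Intersection of lines p1 + t*d1 and p2 + s*d2, or None if parallel."""
    x1, y1 = p1
    dx1, dy1 = d1
    x2, y2 = p2
    dx2, dy2 = d2
    denom = dx1 * dy2 - dy1 * dx2          # 2D cross product of the directions
    if abs(denom) < 1e-12:
        return None                        # parallel sight lines never meet
    t = ((x2 - x1) * dy2 - (y2 - y1) * dx2) / denom
    return (x1 + t * dx1, y1 + t * dy1)

def connection_line(center, point):
    """Vector from the group center to the intersection point."""
    return (point[0] - center[0], point[1] - center[1])

# Two entities at (-1, 0) and (1, 0), sight lines converging at (0, 2):
p = intersect((-1.0, 0.0), (1.0, 2.0), (1.0, 0.0), (-1.0, 2.0))   # -> (0.0, 2.0)
cam_line = connection_line((0.0, 0.0), p)                          # -> (0.0, 2.0)
```

Under the mapping above, the group center corresponds to Torama’s midpoint between the two eyeball centers, and cam_line to the claimed connection line.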
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the line-of-sight direction vector of Torama for the orientation of Shirai according to known methods to yield the predictable result of defining a camera line based on the direction an object is facing. By substituting the line-of-sight direction vector of Torama for Shirai’s orientation , Shirai modified by Torama teaches defining a camera line based on the one or more connection lines . The line-of-sight direction vector of Torama is equivalent to a connection line. Regarding claim 10, Shirai in view of Torama teaches the non-transitory storage medium of claim 9 . Shirai further teaches wherein defining the group comprises identifying the multiple entities from a cluster of entities based on a size of the group and distances among the entities in the cluster ( Paragraph 0096-0097, 0101, 0110 – “a predetermined event occurs when at least one of the following meets a first predetermined condition: (i) the respective positions of the first user avatar M1 and the second user avatar M1 in the virtual space, and (ii) a first relative positional relationship between the first user avatar M1 and the second user avatar M1 in the virtual space… the first predetermined condition is satisfied when the first user avatar M1 and the second user avatar M1 are in close proximity to each other… the predetermined event includes an image acquisition event such as commemorative picture taking…when a commemorative picture is taken for an image acquisition event, other user avatars M1 located at the same location in the past may appear according to a selection (request) by the user that causes the image acquisition event to occur. 
Thereby, a group picture-like image containing many user avatars at the image acquisition event can be obtained…the user may select an avatar with whom to take the image (e.g., from among multiple nearby avatars). Also, if no other avatars are nearby, an image may be taken of the first user avatar M1 alone”; Note: avatars are identified for taking a photo based on distance between the avatars and the number of nearby avatars (size of group) . All the avatars in the area are equivalent to a cluster). Regarding claim 11, Shirai in view of Torama teaches the non-transitory storage medium of claim 9 . Shirai does not teach wherein identifying the one or more intersections comprises identifying the one or more intersections made by the multiple extension lines. However, Torama teaches identifying the one or more intersections made by the multiple extension lines ( Paragraph 0035 – “the eye tracking device 1 according to the present disclosed technology may calculate an intersection between a line of sight of a right eye and a line of sight of a left eye”; Note: an intersection made by lines-of-sight is identified. The lines-of-sight are equivalent to the extension lines ). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shirai to incorporate the teachings of Torama to identify intersections made by the extension lines for the benefit of helping find an overall line-of-sight for multiple lines-of-sight, which would assist in situations, like in Shirai, when there are multiple avatars and lines-of-sight while capturing a picture. Specifically, in Shirai, “ the line-of-sight direction of the virtual camera 60 corresponds to the orientation toward the first user avatar M1 ” (Paragraph 0171), but if there are multiple avatars, the line-of-sight of the virtual camera would need to correspond to the orientation toward the multiple avatars. 
Finding a converging point of the orientation line of each of the avatars would help determine an optimal orientation of the virtual camera toward the multiple avatars for taking a picture that generally faces where the avatars are facing. Regarding claim 13, Shirai in view of Torama teaches the non-transitory storage medium of claim 9 . Shirai further teaches if there is only one orientation , defining the camera line as along the orientation ( Fig. 16, Paragraph 0171 – “the first user avatar M1 is positioned in front of the virtual camera 60. At this time, the line-of-sight direction of the virtual camera 60 corresponds to the orientation toward the first user avatar M1”; Note: a line-of-sight of the virtual camera is a camera line; see screenshot of Fig. 16 above , which shows the camera location and camera line . The camera line is defined along the orientation toward the avatar). Shirai does not teach the connection line in the limitation: “ if there is only one connection line, defining the camera line as along the connection line; or if there are two connection lines, defining the camera line as along a line that bisects an ordinary angle formed by the two connection lines; or if there are more than two connection lines, defining the camera line as along a line that bisects an ordinary angle formed by two connection lines that pass through first two intersections made by majority of the multiple extension lines ” . 
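For reference, the bisector rule recited in the quoted limitation — with two connection lines, the camera line bisects the ordinary (non-reflex) angle between them — can be sketched as follows (a hypothetical illustration; the names and values appear in neither reference):

```python
import math

# Minimal 2D sketch (hypothetical): the camera line as the bisector of the
# ordinary angle formed by two connection lines. Assumes the two lines are
# not exactly opposite (the sum of unit vectors would then be zero).

def normalize(v):
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

def bisector(u, v):
    """Unit vector bisecting the ordinary angle between vectors u and v."""
    ux, uy = normalize(u)
    vx, vy = normalize(v)
    return normalize((ux + vx, uy + vy))

# Connection lines along +x and +y: the camera line runs along the 45° diagonal.
cam = bisector((2.0, 0.0), (0.0, 3.0))     # -> roughly (0.707, 0.707)
```

Summing the unit vectors of the two connection lines always yields the bisector of the ordinary angle, since the sum of two unit vectors lies on the diagonal of the rhombus they span.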
However, Torama teaches that there is only one connection line ( Paragraph 0035 – “the eye tracking device 1 according to the present disclosed technology may calculate an intersection between a line of sight of a right eye and a line of sight of a left eye, and output, as a line-of-sight direction vector, a vector starting from the midpoint between the eyeball center position of the right eye and the eyeball center position of the left eye and ending at the intersection”; Note: the vector starting from the midpoint to the intersection is equivalent to the connection line. There is only one connection line ). A person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the line-of-sight direction vector (connection line) of Torama could have been substituted for the orientation of Shirai because both the orientation and line-of-sight direction vector serve the purpose of representing the direction an object is facing . Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of defining a camera line based on the direction an object is facing . Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the line-of-sight direction vector of Torama for the orientation of Shirai according to known methods to yield the predictable result of defining a camera line based on the direction an object is facing. By substituting the line-of-sight direction vector of Torama for Shirai’s orientation , Shirai modified by Torama teaches defining a camera line as along the connection line . The line-of-sight direction vector of Torama is equivalent to a connection line. Regarding claim 14, Shirai in view of Torama teaches the non-transitory storage medium of claim 9 . 
Shirai further teaches if there is only one orientation , defining the camera line as along the orientation ( Fig. 16, Paragraph 0171 – “the first user avatar M1 is positioned in front of the virtual camera 60. At this time, the line-of-sight direction of the virtual camera 60 corresponds to the orientation toward the first user avatar M1”; Note: a line-of-sight of the virtual camera is a camera line; see screenshot of Fig. 16 above , which shows the camera location and camera line . The camera line is defined along the orientation toward the avatar). Shirai does not teach the connection line in the limitation: “ if there is only one connection line, defining the camera line as along the connection line; or if there are two connection lines, defining the camera line as along a line that bisects an ordinary angle formed by the two connection lines; or if there are more than two connection lines and there is a reflex angle formed by two adjacent connection lines, defining the camera line as along a line that bisects an explementary angle of the reflex angle formed by the two adjacent connection lines ” . However, Torama teaches that there is only one connection line ( Paragraph 0035 – “the eye tracking device 1 according to the present disclosed technology may calculate an intersection between a line of sight of a right eye and a line of sight of a left eye, and output, as a line-of-sight direction vector, a vector starting from the midpoint between the eyeball center position of the right eye and the eyeball center position of the left eye and ending at the intersection”; Note: the vector starting from the midpoint to the intersection is equivalent to the connection line. There is only one connection line ). 
A person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the line-of-sight direction vector (connection line) of Torama could have been substituted for the orientation of Shirai because both the orientation and line-of-sight direction vector serve the purpose of representing the direction an object is facing . Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of defining a camera line based on the direction an object is facing . Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the line-of-sight direction vector of Torama for the orientation of Shirai according to known methods to yield the predictable result of defining a camera line based on the direction an object is facing. By substituting the line-of-sight direction vector of Torama for Shirai’s orientation , Shirai modified by Torama teaches defining a camera line as along the connection line . The line-of-sight direction vector of Torama is equivalent to a connection line. Regarding claim 16, Shirai teaches a computing environment ( Paragraph 0045, 0051 – “The virtual reality generation system 1 includes a server device 10 and one or more terminal devices 20…The configuration of the server device 10 will be described in detail. The server device 10 is constituted by a server computer. 
The server device 10 may be realized by a plurality of server computers working together”; Note: the virtual reality generation system is a computing environment ) comprising: a processor set ( Paragraph 0057, 0063 – “The server controller 13 may include a CPU (Central Processing Unit) that performs specific functions by loading a dedicated microprocessor or a specific program, a GPU (Graphics Processing Unit), and the like… The terminal controller 25 includes one or more processors”; Note: the server and terminal controller includes processors ) ; a communication fabric ( Paragraph 0053, 0059 – “The server communicator 11 includes an interface that communicates with an external device wirelessly or by wire to send and receive information. The server communicator 11 may include, for example, a wireless LAN (Local Area Network) communication module or a wired LAN communication module or the like. The server communicator 11 can send and receive information to and from the terminal devices 20 via the network 3… The terminal communicator 21 communicates with an external device wirelessly or by wire, and includes an interface for sending and receiving information”; Note: the server and terminal communicators are communication fabrics ) ; at least one volatile memory ( Paragraph 0224 – “temporary storage of various data required in the terminal device 20A can be realized by RAM (Random Access Memory) 221A of the terminal memory 22A”; Note: RAM is volatile memory ) ; a persistent storage ( Paragraph 0060 – “the terminal memory 22 may include a semiconductor memory, a magnetic memory, or optical memory”; Note: the terminal memory includes persistent storage, such as magnetic memory ) ; and a set of peripheral devices ( Paragraph 0058, 0061-0062 – “the terminal device 20 is provided with a terminal communicator 21, a terminal memory 22, a display portion 23, an input portion 24, and a terminal controller 25… The display portion 23 includes a display device…the input portion 
24 may include physical keys or may further include any input interface, including a pointing device such as a mouse or the like. The input portion 24 may also be able to accept non-contact type user input, such as voice input and gesture input. For gesture input, a sensor (image sensor, acceleration sensor, distance sensor, or the like) may be used to detect the user's body movements”; Note: the display portion and input portion of the terminal device includes various peripheral devices, such as a display, a mouse, and sensors ) , wherein the persistent storage further includes an operating system and stores thereupon a metaverse application program, the metaverse application program, when being executed by the computing environment, causes the computing environment to perform ( Paragraph 0057, 0060, 0076, 0078 – “the server controller 13 cooperates with the terminal device 20 to execute a virtual reality application in response to user operations with respect to a display portion 23 of the terminal device 20…the terminal memory 22 may include a semiconductor memory, a magnetic memory, or optical memory, or the like. The terminal memory 22 stores various information and programs used in the processing of virtual reality that are received from the server device 10…an image of a three-dimensional space of the Metaverse may be used… FIG. 6 is an explanatory diagram of an example of a terminal image G600 for one user (hereinafter referred to as a “first user”) when four users share the same virtual space”; Note: the virtual reality program is a metaverse application, as it allows for multiple users to interact in a virtual space . 
Furthermore, it would be obvious to one of ordinary skill in the art that the memory stores operating system software since an operating system is required for a computing device, such as the terminal device, to run properly ) : defining a group that includes multiple entities ( Paragraph 0109-0110 – “when unknown (non-friend) avatars exist in the virtual space, the unknown avatars will not appear in the image…when a commemorative picture is taken for an image acquisition event, other user avatars M1 located at the same location in the past may appear according to a selection (request) by the user that causes the image acquisition event to occur. Thereby, a group picture-like image containing many user avatars at the image acquisition event can be obtained…the user may