DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are present for examination.
Claim Objections
Claim 17 is objected to because of the following informalities: “the real-time object” should be “the real-time image”. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 5, 6, 11, 12, and 17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Regarding claim 5, it recites “the real-time image currently displayed” and “the stored real-time image”. However, claims 1 and 4, from which claim 5 depends, never recite displaying different real-time images. It is not clear whether “the real-time image currently displayed” and “the stored real-time image” are the same or different. In addition, it is not clear what “the real-time image of the searched duration” means, because nowhere in claim 5, or in claims 1 and 4, is it disclosed that the real-time image has a searched duration.
Claim 6 depends from claim 5 but fails to cure the deficiencies of claim 5.
In addition, claim 6 recites “based on the selected object being included within the real-time image currently displayed”, which is mutually exclusive with the condition recited in claim 5, “based on the selected object being not within the real-time image currently displayed”. Under the condition of claim 6, the search for a duration performed in claim 5 would not have been performed, and the term “the searched duration” recited in claim 6 is therefore indefinite.
For examination purposes, “the real-time image currently displayed” and “the stored real-time image” have been interpreted as different real-time images, “the real-time image of the searched duration” has been interpreted as “the stored real-time image”, and “the searched duration” has been interpreted as any duration.
Regarding claim 11, it recites “The user terminal of claim 1, wherein, based on the voice of the user being sensed while an execution screen of another application being executed in a foreground is displayed on the display unit, the controller performs a control operation to simultaneously display a screen of the selected object information and the execution screen of the other application by splitting a screen of the display unit.” However, claim 1 recites “display a real-time image sensed through the camera on the display unit”. It is not clear how “an execution screen of another application being executed in a foreground” can be displayed on the display unit while “a real-time image sensed through the camera” is displayed on the display unit.
Claim 12 depends from claim 11 but fails to cure the deficiencies of claim 11.
Regarding claim 17, it recites “the selected object being located within a predetermined distance”. It is not clear what the predetermined distance is, because a distance is defined between two points, while the claim identifies only one point (the location of the selected object) without identifying the other. Therefore, it is not clear from where “within a predetermined distance” is measured.
For examination purposes, the predetermined distance has been interpreted as any distance value.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 2, 3, 7, 17, 18, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Korean Patent Publication No. KR 20200097523 A to Lee.
Regarding claim 1, Lee discloses A user terminal (Lee, Translation, para. [0026]), comprising:
a display unit (Lee, Translation, para. [0032], the first touch panel displays a screen provided by a travel destination information plug-in program);
a microphone (Lee, Translation, para. [0010], a microphone for receiving a user’s voice);
a location information module (Lee, Translation, para. [0032], the first sensing unit senses the current location of the wearable terminal using a first position sensor, para. [0073], disclosing the second sensing unit senses the current location of the terminal and the direction in which the camera is facing the subject);
a camera (Lee, Translation, para. [0012], a camera, para. [0062], a camera); and
a controller (Lee, Translation, para. [0054], first control unit, para. [0062], a second control unit) configured to
display a real-time image sensed through the camera on the display unit (Lee, Translation, para. [0072], disclosing the user drives the camera and the building that the user wants to search for is recognized as a subject, FIG. 8, 810 showing a real-time image sensed through the camera on the display unit, FIG. 10, 1010 showing a real-time image sensed through the camera on the display unit),
analyze, in real time, a plurality of objects included within the real-time image through analysis of the real-time image and select an object corresponding to voice of a user by analyzing, through artificial intelligence, at least one of a current location and a direction sensed through the location information module and the voice of the user input through the microphone (Lee, Translation, para. [0033], disclosing receiving the user voice corresponding to a mode selected by the user between a building search mode and a surrounding search mode, para. [0039], disclosing the user can command a “building search” by saying “building search” by voice, para. [0046], disclosing the user commands “nearby search”, para. [0064], disclosing a travel destination information plug-in program that supports a building search mode and a surrounding search mode while the camera is in operation, para. [0068], disclosing the surrounding search mode enables immediate and visual recognition of the location, direction, distance information etc., of attractions located near the user, para. [0078], disclosing AR surrounding search mode, para. [0080], disclosing transmitting the sensed current location information, the entered search term, and the radius information to the second server, and receiving information about surrounding objects corresponding to the location information, search term, and radius from the second server, para. [0081], disclosing processing information about the received surrounding objects as shown in the screen 1010 of Fig. 10 and displaying it on the second UI unit, indicating the travel destination information plug-in program and/or the second server can correspond to “artificial intelligence”, the user commanding “building search” or “nearby search” can correspond to “the voice of the user input through the microphone”, the sensed current location information can correspond to “at least one of a current location and a direction sensed through the location information module”, and the nearby search or surrounding search that enables immediate and visual recognition of the location, direction, distance information etc., of attractions located near the user can correspond to analyzing, in real time, a plurality of objects included within the real-time image (the attractions near the user) through analysis of the real-time image and selecting an object corresponding to voice of a user (the attractions satisfying the user’s search terms) by analyzing, through artificial intelligence (the travel destination information plug-in program and/or the second server), at least one of a current location and a direction sensed through the location information module and the voice of the user input through the microphone, as shown in Fig. 10),
display the real-time image and selected object information of the selected object together while highlighting the selected object within the real-time image, based on the selected object being included within the real-time image currently displayed on the display unit (Lee, Translation, para. [0083], disclosing generating and displaying a screen overlaid with the surrounding image shown through the camera showing the location of the received surrounding objects, with 6 surrounding objects found, and 2 surrounding objects located on the screen currently being viewed by the camera, a balloon-shaped pin pointing to surrounding objects located on the currently displayed screen, Fig. 10, showing the balloon-shaped pin as highlighting the selected object within the real-time image, the balloon-shaped pin can correspond to the selected object information being displayed with the real-time image while highlighting the selected object within the real-time image, based on the selected object corresponding to the balloon-shaped pin being included within the real-time image currently displayed on the display unit), and
display the real-time image and the selected object information together without highlighting the selected object, based on the selected object being not included within the real-time image currently displayed on the display unit (Lee, Translation, para. [0083], disclosing generating and displaying a screen overlaid with the surrounding image shown through the camera showing the location of the received surrounding objects, with 6 surrounding objects found, and 2 surrounding objects located on the screen currently being viewed by the camera, which can be seen by a radar icon displaying the locations of searched surrounding objects as dots, Fig. 10, showing the balloon-shaped pin as highlighting the selected object within the real-time image, and a small radar icon with dots displayed on it showing the locations of the searched surrounding objects, indicating it can display the real-time image and the selected object information (the surrounding objects shown as dots on the radar icon that are not located in the displayed image) together without highlighting the selected object, based on the selected object being not included within the real-time image currently displayed on the display unit).
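The conditional display logic mapped above for claim 1 (highlight objects that fall inside the current camera view; still represent off-screen objects, e.g., as radar dots, without highlighting) can be illustrated with a minimal sketch. This is illustrative only, not Lee's or applicant's actual implementation; all names (SurroundingObject, camera_fov_deg, etc.) are hypothetical.
```python
from dataclasses import dataclass

@dataclass
class SurroundingObject:
    name: str
    bearing_deg: float  # compass direction from the user to the object

def in_current_view(obj: SurroundingObject, heading_deg: float,
                    camera_fov_deg: float = 60.0) -> bool:
    """True if the object's bearing falls inside the camera's field of view."""
    delta = (obj.bearing_deg - heading_deg + 180.0) % 360.0 - 180.0
    return abs(delta) <= camera_fov_deg / 2.0

def render_overlay(objects, heading_deg):
    for obj in objects:
        if in_current_view(obj, heading_deg):
            # in view: highlighted balloon-shaped pin on the real-time image
            print(f"pin (highlighted): {obj.name}")
        else:
            # out of view: still represented, e.g., as a dot on a radar icon
            print(f"radar dot (no highlight): {obj.name}")

render_overlay([SurroundingObject("Cheonggyecheon", 10.0),
                SurroundingObject("Gyeongbokgung", 150.0)], heading_deg=0.0)
```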
Regarding claim 2, Lee discloses the user terminal of claim 1, wherein, based on the selected object being included within the real-time image currently displayed, the controller performs a control operation to vary a location at which the selected object information is displayed on the display unit depending on a location of the selected object within the real-time image (Lee, Fig. 10, showing the balloon-shaped pins being displayed at different locations depending on the locations of the selected object within the real-time image, the balloon-shaped pins can correspond to selected object information showing the locations of the selected object highlighted by the balloon-shaped pin).
Regarding claim 3, Lee discloses the user terminal of claim 1, wherein, based on the selected object being not included within the real-time image currently displayed, the controller performs a control operation to display the selected object information at a preset location of the display unit (Lee, Fig. 10, the radar icon showing the selected object information as dots at a preset location of the display unit based on the selected objects not included within the real-time image currently displayed).
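For illustration, the placement behavior discussed for claims 2 and 3 reduces to a small conditional: the information label tracks the object's on-screen position when the object is in view, and falls back to a fixed, preset screen location otherwise (loosely analogous to Lee's radar icon). The sketch below is hypothetical, not code from either reference.
```python
PRESET_LOCATION = (20, 20)  # assumed fixed corner position, in pixels

def info_position(object_pixel_xy, in_view: bool):
    if in_view:
        x, y = object_pixel_xy
        return (x, y - 30)      # label floats just above the object
    return PRESET_LOCATION      # fixed preset location when off-screen

print(info_position((240, 400), in_view=True))   # -> (240, 370)
print(info_position(None, in_view=False))        # -> (20, 20)
```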
Regarding claim 7, Lee discloses the user terminal of claim 1, further comprising a wireless communication unit (Lee, Translation, para. [0030], disclosing the first communication unit is responsible for communication between the wearable terminal and the first server, para. [0080], disclosing the second communication unit transmits information to the second server and receives information from the second server), wherein the controller performs a control operation to transmit identification information of the selected object to an external platform through the wireless communication unit, receive at least one of text, sound, or image information related to the identification information through the wireless communication unit from the external platform, and include the at least one of the text, the sound, or the image information in the selected object information (Lee, Translation, para. [0016], disclosing the first and second travel destination information providing servers, para. [0085], disclosing when the user selects a point among the radar icons on the screen or selects a pin, the second travel destination information provider can display detailed information of the target object corresponding to the selected point as shown in the screen of FIG. 11, para. [0087], when the user selects the AR walking navigation menu to move to Cheonggyecheon, the second travel destination information provider generates and displays a route guidance screen to go from the current location to the location corresponding to the screen, the navigation screen is provided in collaboration with the second server, para. [0094], disclosing the second server provides search result information to the second travel destination information provider, indicating the user device can correspond to a smartphone as shown in Fig. 11, which can communicate with the servers through wireless communications provided by the wireless communication unit on the smartphone, the user selecting navigation to move to Cheonggyecheon on the screen will transmit identification information of the selected object such as Cheonggyecheon to an external platform such as the second server through the wireless communication unit, and the server providing information can correspond to the user device receiving at least one of text, sound, or image information related to the identification information through the wireless communication unit from the external platform (second server) and including the at least one of the text, the sound, or the image information in the selected object information, as shown in Fig. 11).
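The claim 7 exchange, as mapped to Lee, amounts to sending the selected object's identifier to an external server and folding the returned text/sound/image data into the displayed object information. A minimal sketch follows; the payload shape is assumed, and the transport is stubbed so the example runs without a network.
```python
def query_external_platform(object_id: str) -> dict:
    # Stand-in for a wireless request to a server such as Lee's "second server".
    return {"text": f"Details about {object_id}",
            "image_url": f"https://example.com/{object_id}.jpg",
            "audio_url": f"https://example.com/{object_id}.mp3"}

def build_object_info(object_id: str) -> dict:
    response = query_external_platform(object_id)
    # Include whichever of text / sound / image the platform returned.
    return {"id": object_id, **{k: v for k, v in response.items() if v}}

print(build_object_info("Cheonggyecheon"))
```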
Regarding claim 17, Lee discloses the user terminal of claim 1, wherein the controller performs a control operation to display the selected object information, based on the selected object being included within the real-time object and the selected object being located within a predetermined distance (Lee, Fig. 10, showing the balloon-shaped pins as the selected object information based on the selected object being located within the real-time image displayed on the screen, and the selected object being located within the displayed image region, which defines a predetermined distance from the center of the screen, as being within a predetermined distance).
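As interpreted above, the claim 17 condition is a conjunction: the object must be inside the displayed image and within a predetermined (here, arbitrary) distance. A hypothetical one-function sketch, with an assumed threshold value:
```python
PREDETERMINED_DISTANCE_M = 500.0  # assumed threshold; the claim leaves it open

def should_display_info(in_view: bool, distance_m: float) -> bool:
    # Display only when both conditions of the claim hold.
    return in_view and distance_m <= PREDETERMINED_DISTANCE_M

print(should_display_info(True, 120.0))  # True  -> show pin with info
print(should_display_info(True, 900.0))  # False -> in view but too far away
```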
Regarding claim 18, Lee discloses the user terminal of claim 1, wherein the controller performs a control operation to display a plurality of object information for the plurality of objects together with a plurality of object type icons (Lee, Fig. 10, showing the balloon-shaped pins and radar icons including dots representing the selected objects, Fig. 13, showing different object information with different colored icons, Fig. 14, 1420 showing object information with different colored icons and a list of icons having different colors and shapes, para. [0095], disclosing displaying pins with different colors mapped to each representative building according to the attributes of each representative building, indicating the displayed object information can correspond to a plurality of object information, such as attributes, for the plurality of objects together with a plurality of object type icons explaining the different colors and shapes).
Regarding claim 20, Lee discloses A method of controlling a user terminal (Lee, Translation, para. [0005], para. [0026]), the method comprising:
displaying a real-time image sensed through a camera on a display unit (Lee, Translation, para. [0072], disclosing the user drives the camera and the building that the user wants to search for is recognized as a subject, FIG. 8, 810 showing a real-time image sensed through the camera on the display unit, FIG. 10, 1010 showing a real-time image sensed through the camera on the display unit);
analyzing, in real time, a plurality of objects included within the real-time image through analysis of the real-time image and selecting an object corresponding to voice of a user by analyzing, through artificial intelligence, at least one of a current location and a direction sensed through a location information module and the voice of the user input through a microphone (Lee, Translation, para. [0033], disclosing receiving the user voice corresponding to a mode selected by the user between a building search mode and a surrounding search mode, para. [0039], disclosing the user can command a “building search” by saying “building search” by voice, para. [0046], disclosing the user commands “nearby search”, para. [0064], disclosing a travel destination information plug-in program that supports a building search mode and a surrounding search mode while the camera is in operation, para. [0068], disclosing the surrounding search mode enables immediate and visual recognition of the location, direction, distance information etc., of attractions located near the user, para. [0078], disclosing AR surrounding search mode, para. [0080], disclosing transmitting the sensed current location information, the entered search term, and the radius information to the second server, and receiving information about surrounding objects corresponding to the location information, search term, and radius from the second server, para. [0081], disclosing processing information about the received surrounding objects as shown in the screen 1010 of Fig. 10 and displaying it on the second UI unit, indicating the travel destination information plug-in program and/or the second server can correspond to “artificial intelligence”, the user commanding “building search” or “nearby search” can correspond to “the voice of the user input through the microphone”, the sensed current location information can correspond to “at least one of a current location and a direction sensed through the location information module”, and the nearby search or surrounding search that enables immediate and visual recognition of the location, direction, distance information etc., of attractions located near the user can correspond to analyzing, in real time, a plurality of objects included within the real-time image (the attractions near the user) through analysis of the real-time image and selecting an object corresponding to voice of a user (the attractions satisfying the user’s search terms) by analyzing, through artificial intelligence (the travel destination information plug-in program and/or the second server), at least one of a current location and a direction sensed through the location information module and the voice of the user input through the microphone, as shown in Fig. 10); and
displaying the real-time image and selected object information of the selected object together while highlighting the selected object within the real-time image, based on the selected object being included within the real-time image currently displayed on the display unit (Lee, Translation, para. [0083], disclosing generating and displaying a screen overlaid with the surrounding image shown through the camera showing the location of the received surrounding objects, with 6 surrounding objects found, and 2 surrounding objects located on the screen currently being viewed by the camera, a balloon-shaped pin pointing to surrounding objects located on the currently displayed screen, Fig. 10, showing the balloon-shaped pin as highlighting the selected object within the real-time image, the balloon-shaped pin can correspond to the selected object information being displayed with the real-time image while highlighting the selected object within the real-time image, based on the selected object corresponding to the balloon-shaped pin being included within the real-time image currently displayed on the display unit).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of US Patent Publication No. US 20190278797 A1 to Joshi.
Regarding claim 4, Lee discloses the user terminal of claim 1, further comprising a memory (Lee, Translation, para. [0020], disclosing element, device, or system includes memory, para. [0055], disclosing a memory), wherein the controller performs a control operation to identify at least one object among the plurality of objects within the real-time image (Lee, Translation, para. [0068], disclosing visual recognition of the location, direction, distance information of attractions located near the user, para. [0072], disclosing after the user drives the camera, the building (e.g., a tourist attraction) is recognized as a subject). However, Lee does not expressly disclose tagging the identified object with identification information and storing, in the memory, the real-time image including the identification information with which the at least one object is tagged.
On the other hand, Joshi discloses the controller performs a control operation to identify at least one object among the plurality of objects within the image (Joshi, para. [0032], disclosing obtaining a photographic image, para. [0034], disclosing analyzing the photographic image to identify at least one object in the photographic image and generating at least one tag respectively for each of the at least one object), tag the identified object with identification information (Joshi, para. [0041], disclosing associating the one or more tags with the photographic image stored locally, para. [0042], disclosing one or more tags can be created for the photographic image), and store, in the memory, the image including the identification information with which the at least one object is tagged (Joshi, para. [0041], disclosing associating the one or more tags with the photographic image stored locally, para. [0088], disclosing associating the tags and pixel locations thereof with the photo identifier and the user identifier, and saving the photo). Because Lee discloses obtaining real-time images, combining Lee and Joshi would identify at least one object among the plurality of objects within the real-time image, tag the identified object with identification information, and store, in the memory, the real-time image including the identification information with which the at least one object is tagged.
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine Lee and Joshi. The suggestion/motivation would have been to allow users to find desired images easily to improve processing efficiency, as suggested by Joshi (see Joshi, para. [0042]).
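The Lee/Joshi combination described above, tagging recognized objects and storing the frame together with its tags, might look like the following sketch. All types and names are hypothetical, and the stored list stands in for the terminal's memory.
```python
from dataclasses import dataclass, field

@dataclass
class TaggedFrame:
    timestamp: float
    pixels: bytes                              # stand-in for the image data
    tags: dict = field(default_factory=dict)   # object id -> pixel location

stored_frames: list = []  # stand-in for the terminal's memory

def tag_and_store(timestamp: float, pixels: bytes, detections: dict) -> None:
    """detections maps each identified object's id to its pixel location."""
    stored_frames.append(TaggedFrame(timestamp, pixels, dict(detections)))

tag_and_store(0.0, b"...", {"Cheonggyecheon": (240, 400)})
print(stored_frames[0].tags)
```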
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of US Patent Publication No. US 20130265261 A1 to Min.
Regarding claim 8, Lee discloses the user terminal of claim 1. However, Lee does not expressly disclose wherein the controller performs a control operation to display selected object information of different types according to a movement speed of the user.
On the other hand, Min discloses wherein the controller performs a control operation to display information of different types according to a movement speed of the user (Min, para. [0074], disclosing detecting information of the moving state of a moving object such as moving speed, para. [0168], disclosing if the moving object in which the user terminal device is placed moves at a speed of 80 km/h or below, the controller can control the display unit to display a list type UI, and if the moving object in which the user terminal device is placed moves at a speed above 80 km/h, the controller can control the display unit to display a tile type UI, the list type UI may be menus that are listed in a row, and the tile type UI may be menus that are listed in plural rows; the moving object’s speed can correspond to a movement speed of the user because the user terminal device is placed on the moving object). Because Lee discloses displaying selected object information, combining Lee and Min could allow the controller to perform a control operation to display selected object information of different types according to a movement speed of the user.
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine Lee and Min. The suggestion/motivation would have been to improve user convenience, as suggested by Min (see Min, para. [0196]).
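Min's speed-dependent presentation (para. [0168]), as applied to Lee's object information, is a simple threshold rule: a list-type UI at or below 80 km/h and a tile-type UI above it. A hypothetical sketch:
```python
def choose_ui_type(speed_kmh: float) -> str:
    # Min: list-type UI (menus in a row) at <= 80 km/h, tile-type UI above.
    return "list" if speed_kmh <= 80.0 else "tile"

print(choose_ui_type(30.0))   # -> "list"
print(choose_ui_type(100.0))  # -> "tile"
```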
Claims 13 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Chinese Patent Publication No. CN 111868824 A to Hujber et al.
Regarding claim 13, Lee discloses the user terminal of claim 1, further comprising a voice output unit (Lee, Translation, para. [0010], disclosing a speaker for outputting information about the target object in the form of sound), wherein the controller performs a control operation to output the selected object information with audio through the voice output unit (Lee, Translation, para. [0034], disclosing the speaker outputs the result, such as information corresponding to a building or surrounding search information, in the form of sound).
However, Lee does not expressly disclose wherein, based on a power saving mode of the display unit at a time point at which the voice of the user is sensed, the controller performs a control operation to output the selected object information only with audio through the voice output unit.
On the other hand, Hujber discloses further comprising a voice output unit (Hujber, Translation, para. [0009], disclosing a smart voice assistant device includes at least one speaker), wherein, based on a power saving mode of the display unit at a time point at which the voice of the user is sensed, the controller performs a control operation to output the information only with audio through the voice output unit (Hujber, Translation, para. [0012], disclosing detecting user voice interaction and modulating the output volume accordingly, para. [0055], disclosing the display device can be turned off (e.g., to save power), and other output modes can be used for user output (e.g., to deliver audio information and prompts through the device’s speakers), indicating that when the display device is turned off to save power (power saving mode) at a time point at which the voice of the user is detected (sensed), the output of the information will only be with audio through the speaker as the voice output unit). Because Lee discloses outputting the selected object information with audio through the voice output unit, combining Lee and Hujber could allow the controller, based on a power saving mode of the display unit at a time point at which the voice of the user is sensed, to perform a control operation to output the selected object information only with audio through the voice output unit.
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine Lee and Hujber. The suggestion/motivation would have been to provide improved techniques for operating, configuring, and optimizing the performance of voice interaction devices, as suggested by Hujber (see Hujber, Translation, para. [0023]).
Regarding claim 14, Lee in view of Hujber discloses the user terminal of claim 13, wherein, based on the display unit switching to a non-power saving mode, the controller performs a control operation to display the real-time image and the selected object information in full screen (Lee, Figs. 10 and 11, showing the real-time image and the selected object information displayed in full screen, Hujber, Translation, para. [0023], disclosing the voice interaction device includes multiple input and output modalities, which may include audio input, audio output, image sensor, display and/or touch screen display, para. [0055], disclosing the display device can be turned off (e.g., to save power), and other output modes can be used for user output (e.g., to deliver audio information and prompts through the device’s speakers), indicating that turning on the display device switches the display unit to a non-power saving mode, whereupon the controller performs a control operation to display the real-time image and the selected object information in full screen).
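The Lee/Hujber behavior discussed for claims 13 and 14 reduces to a mode check: audio-only output while the display is in a power-saving (off) state, and full-screen display of the real-time image plus object information once the display is active again. A hypothetical sketch:
```python
def present_object_info(display_power_saving: bool, info: str) -> str:
    if display_power_saving:
        # screen stays off; the information goes out through the speaker only
        return f"AUDIO ONLY: {info}"
    # display restored: show the real-time image and the info in full screen
    return f"FULL SCREEN: real-time image + {info}"

print(present_object_info(True, "Cheonggyecheon details"))
print(present_object_info(False, "Cheonggyecheon details"))
```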
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Hujber as applied to claim 14 above, and further in view of Chinese Patent Publication No. CN 106390465 A to Xu.
Regarding claim 15, Lee in view of Hujber discloses the user terminal of claim 14. However, neither Lee nor Hujber expressly discloses wherein the controller performs a control operation to output audio of different types according to a movement speed of the user.
On the other hand, Xu discloses the controller performs a control operation to output audio of different types according to a movement speed of the user (Xu, Translation, para. [0036], disclosing sensing changes in the user’s current speed and outputting standard audio of different types and/or different volumes according to the changes).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine Lee in view of Hujber with Xu. The suggestion/motivation would have been to provide users with a better experience by increasing audio diversity, as suggested by Xu (see Xu, Translation, para. [0036]).
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Lee.
Regarding claim 19, Lee discloses the user terminal of claim 18, wherein, based on selection of at least one of the plurality of object types by the user, the controller performs a control operation to display only information of at least one object related to the selected at least one object type icon among the plurality of objects (Lee, Fig. 14, 1410, showing only one type of icon displayed, 1420, showing different types of icons corresponding to different objects, Translation, para. [0092], disclosing a screen with a search input window and a screen showing the type of pin in surrounding search mode, para. [0093], disclosing the screen 1410 displays information about nearby convenience stores when the user enters “convenience store” as a search term, para. [0094], disclosing extracting and searching for representative buildings located around the user, classifying information about the searched representative buildings by attributes of each representative building, and providing it to the second travel destination information provider, para. [0095], disclosing the second travel destination information provider generates screen 1420 showing the searched representative buildings and displays pins with different colors mapped to each representative building according to the attributes of each representative building, indicating that, based on selection of at least one of the plurality of object types by the user (convenience store as the selected object type), the controller performs a control operation to display only information of at least one object related to the selected at least one object type icon among the plurality of objects (displaying the convenience store icons on the screen)).
Although Lee does not expressly disclose selection of at least one of the plurality of object type icons by the user, Lee discloses that the user can input search terms of one object type, which would result in the corresponding object type icons being selected. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify Lee to allow the selection of at least one of the plurality of object type icons by the user and, based on the selection, to perform a control operation to display only information of at least one object related to the selected at least one object type icon among the plurality of objects (displaying only convenience stores in 1410 as the selected at least one object type icon). The suggestion/motivation would have been to enable users to obtain travel destination information more conveniently, as suggested by Lee (see Lee, para. [0005]).
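The type filtering discussed for claims 18 and 19 can be sketched as selecting only the objects whose type matches the user's chosen object type icon (or, in Lee, a search term such as "convenience store"). The data below is hypothetical:
```python
objects = [
    {"name": "GS25",           "type": "convenience store"},
    {"name": "Cheonggyecheon", "type": "attraction"},
    {"name": "CU",             "type": "convenience store"},
]

def filter_by_selected_types(objs, selected_types: set):
    # Keep only objects whose type matches a selected object type icon.
    return [o for o in objs if o["type"] in selected_types]

for obj in filter_by_selected_types(objects, {"convenience store"}):
    print(obj["name"])  # only the convenience stores are displayed
```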
Allowable Subject Matter
Claims 5, 6, 11, and 12 would be allowable if rewritten to overcome the rejections under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
Claims 9, 10, and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 5, none of the prior art references on the record discloses wherein, based on the selected object being not within the real-time image currently displayed, the controller performs a control operation to search for a duration in which the identification information related to the selected object is tagged in the stored real-time image and play a clip of the real-time image of the searched duration together with the selected object information.
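For illustration only, the tagged-duration search recited in claim 5 (and not shown by the cited art) could be sketched as follows, with hypothetical data: stored frames carry tags, the frames tagged with the selected object define the searched duration, and the clip of that duration would be played together with the object information.
```python
def find_tagged_duration(frames, object_id):
    """frames: list of (timestamp, tag_dict). Return (start, end) or None."""
    times = [t for t, tags in frames if object_id in tags]
    return (min(times), max(times)) if times else None

frames = [(0.0, {"GS25": (1, 2)}),
          (1.0, {"Cheonggyecheon": (3, 4)}),
          (2.0, {"Cheonggyecheon": (5, 6)})]
print(find_tagged_duration(frames, "Cheonggyecheon"))  # -> (1.0, 2.0)
```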
Claim 6 depends from claim 5 with additional limitations.
Regarding claim 9, none of the prior art references on the record discloses wherein the controller performs a control operation to display the selected object information of a detailed version based on the movement speed being equal to or greater than a certain speed, and display the selected object information of a simplified version based on the movement speed being less than the certain speed.
Claim 10 depends from claim 9 with additional limitations.
Regarding claim 11, none of the prior art references on the record discloses wherein, based on the voice of the user being sensed while an execution screen of another application being executed in a foreground is displayed on the display unit, the controller performs a control operation to simultaneously display a screen of the selected object information and the execution screen of the other application by splitting a screen of the display unit.
Claim 12 depends from claim 11 with additional limitations.
Regarding claim 16, none of the prior art references on the record discloses wherein the controller performs a control operation to output audio of the selected object information of a simplified version based on the movement speed being equal to or greater than a certain speed, and output audio of the selected object information of a detailed version based on the movement speed being less than the certain speed.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAIXIA DU whose telephone number is (571)270-5646. The examiner can normally be reached Monday - Friday 8:00 am-4:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HAIXIA DU/Primary Examiner, Art Unit 2611