The Visual Studio NuGet package needed to be updated for the new toolset version

I ran into this problem trying to link libpng with Visual Studio 2013. The problem is that the package files only had libraries for Visual Studio 2010 and 2012. The correct solution is to hope the developer publishes an updated package and then upgrade, but it worked for me to hack in an extra setting for VS2013 that points at the VS2012 library files.

I edited the package (in the packages folder inside the solution's directory) by finding packagename\build\native\packagename.targets and, inside that file, copying all the v110 sections. I changed v110 to v120 in the condition fields only, being very careful to leave the filename paths all as v110. This simply allowed Visual Studio 2013 to link against the 2012 libraries, and in this case it worked.
When linking against shared libraries, make sure that the used symbols are not hidden.
The default behavior of gcc is that all symbols are visible. However, when the translation units are built with option -fvisibility=hidden, only functions/symbols marked with __attribute__ ((visibility ("default"))) are external in the resulting shared object.
You can check whether the symbols you are looking for are external by invoking:
# -D shows (global) dynamic symbols that can be used from the outside of XXX.so
nm -D XXX.so | grep MY_SYMBOL
The hidden/local symbols are shown by nm with a lowercase symbol type, for example t instead of T for the code (text) section:
nm XXX.so
00000000000005a7 t HIDDEN_SYMBOL
00000000000005f8 T VISIBLE_SYMBOL
You can also use nm with the option -C to demangle the names (if C++ was used).
Similar to Windows DLLs, one would mark public functions with a define, for example DLL_PUBLIC. The original answer's exact macro isn't shown here; a typical definition (sketched below) wraps the visibility attribute mentioned above, with __declspec(dllexport) as the rough Windows'/MSVC equivalent:
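#if defined _WIN32 || defined __CYGWIN__
  #define DLL_PUBLIC __declspec(dllexport)                     // rough Windows'/MSVC equivalent
#else
  #define DLL_PUBLIC __attribute__ ((visibility ("default")))  // gcc/clang: keep the symbol visible
#endif

More information about visibility can be found on the gcc wiki.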
When a translation unit is compiled with -fvisibility=hidden, the resulting symbols still have external linkage (shown with an uppercase symbol type by nm) and can be used for external linkage without problems if the object files become part of a static library. The linkage becomes local only when the object files are linked into a shared library.
To find which symbols in an object file are hidden run:
>>> objdump -t XXXX.o | grep hidden
0000000000000000 g F .text 000000000000000b .hidden HIDDEN_SYMBOL1
000000000000000b g F .text 000000000000000b .hidden HIDDEN_SYMBOL2
Even though this is a pretty old question with multiple accepted answers, I'd like to share how to resolve an obscure "undefined reference to" error.
Different versions of libraries
I was using an alias to refer to std::filesystem::path: filesystem has been in the standard library since C++17, but my program also needed to compile in C++14, so I decided to use a conditional type alias:
#if (defined _GLIBCXX_EXPERIMENTAL_FILESYSTEM) // is the included filesystem library experimental? (C++14 and newer: <experimental/filesystem>)
using path_t = std::experimental::filesystem::path;
#elif (defined _GLIBCXX_FILESYSTEM) // not experimental (C++17 and newer: <filesystem>)
using path_t = std::filesystem::path;
#endif
Let's say I have three files: main.cpp, file.h, file.cpp:
*file.h #include's <experimental/filesystem> and contains the code above
*file.cpp, the implementation of file.h, #include's "file.h"
*main.cpp #include's <filesystem> and "file.h"
Note the different libraries used in main.cpp and file.h. Since main.cpp #include'd "file.h" after <filesystem>, the version of filesystem used there was the C++17 one. I used to compile the program with the following commands:
$ g++ -g -std=c++17 -c main.cpp                               -> compiles main.cpp to main.o
$ g++ -g -std=c++17 -c file.cpp                               -> compiles file.cpp and file.h to file.o
$ g++ -g -std=c++17 -o executable main.o file.o -lstdc++fs    -> links main.o and file.o
This way any function contained in file.o and used in main.o that required path_t gave "undefined reference" errors, because main.o referred to std::filesystem::path but file.o to std::experimental::filesystem::path.

Resolution: to fix this I just needed to change experimental::filesystem in file.h to filesystem, so that both translation units used the same type.
Missing "extern" in const variable declarations/definitions (C++ only)
For people coming from C it might be a surprise that in C++ global const variables have internal (static) linkage. In C this was not the case, as all global variables are implicitly extern (i.e. when the static keyword is missing).
Example:
// file1.cpp
const int test = 5; // in C++ same as "static const int test = 5"
int test2 = 5;
// file2.cpp
extern const int test;
extern int test2;
void foo()
{
int x = test; // linker error in C++ , no error in C
int y = test2; // no problem
}
The correct approach would be to use a header file and include it in file2.cpp and file1.cpp:
extern const int test;
extern int test2;
Alternatively one could declare the const variable in file1.cpp with an explicit extern (the original answer doesn't show this variant; a minimal sketch follows):
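// file1.cpp -- sketch: an explicit extern on the definition gives the constant
// external linkage, so the extern declaration in file2.cpp now links against it.
extern const int test = 5;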
A "clean" of the build can remove the "dead wood" that may be left lying around from previous builds, failed builds, incomplete builds and other build system related build issues.
In general the IDE or build will include some form of "clean" function, but this may not be correctly configured (e.g. in a manual makefile) or may fail (e.g. the intermediate or resultant binaries are read-only).
Once the "clean" has completed, verify that the "clean" has succeeded and all the generated intermediate file (e.g. an automated makefile) have been successfully removed.
This process can be seen as a last resort, but is often a good first step; especially if the code related to the error has recently been added (either locally or from the source repository).
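For example, with a makefile-based build (assuming the makefile actually provides a clean target) this boils down to:
$ make clean   # remove intermediates and outputs, if the clean target is defined
$ make         # rebuild everything from scratch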
Inconsistent UNICODE definitions

A Windows UNICODE build is built with TCHAR etc. defined as wchar_t etc. When not building with UNICODE defined, the build has TCHAR defined as char etc. These UNICODE and _UNICODE defines affect all the "T" string types; LPTSTR, LPCTSTR and their ilk.
Building one library with UNICODE defined and attempting to link it in a project where UNICODE is not defined will result in linker errors since there will be a mismatch in the definition of TCHAR; char vs. wchar_t.
The error usually includes a function or a value with a char or wchar_t derived type; these could include std::basic_string<> etc. as well. When browsing through the affected function in the code, there will often be a reference to TCHAR or std::basic_string<TCHAR> etc. This is a tell-tale sign that the code was originally intended for both a UNICODE and a Multi-Byte Character (or "narrow") build.
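A hedged illustration (not from the original answer) of why the mismatch breaks the link:

#include <tchar.h>
// Declared in a shared header; the definition lives in a library.
void Log(const TCHAR* message);
// With UNICODE/_UNICODE defined this is really   void Log(const wchar_t*),
// without them it is                             void Log(const char*);
// a library built one way exports a symbol that the other build never asks for.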
To correct this, build all the required libraries and projects with a consistent definition of UNICODE (and _UNICODE).
This can be done with either:
#define UNICODE
#define _UNICODE
Or in the project settings; Project Properties > General > Project Defaults > Character Set
Or on the command line;
/DUNICODE /D_UNICODE
The alternative applies as well: if UNICODE is not intended to be used, make sure the defines are not set and/or the multi-byte character setting is used in the projects, and that this is applied consistently.
Do not forget to be consistent between the "Release" and "Debug" builds as well.
When your include paths are different

Linker errors can happen when a header file and its associated shared library (.lib file) go out of sync. Let me explain.
How do linkers work? The linker matches a function declaration (declared in the header) with its definition (in the shared library) by comparing their signatures. You can get a linker error if the linker doesn't find a function definition that matches perfectly.
Is it possible to still get a linker error even though the declaration and the definition seem to match? Yes! They might look the same in source code, but it really depends on what the compiler sees. Essentially you could end up with a situation like this:
// header1.h
typedef int Number;
void foo(Number);
// header2.h
typedef float Number;
void foo(Number); // this only looks the same lexically
Note how even though both function declarations look identical in source code, they are really different according to the compiler.
You might ask how one ends up in a situation like that? Include paths, of course! If, when compiling the shared library, the include path leads to header1.h and you end up using header2.h in your own program, you'll be left scratching your header wondering what happened (pun intended).
An example of how this can happen in the real world is explained below.
Further elaboration with an example
I have two projects: graphics.lib and main.exe. Both projects depend on common_math.h. Suppose the library exports the following function:
// graphics.lib
# include "common_math.h"
void draw(vec3 p) { ... } // vec3 comes from common_math.h
And then you go ahead and include the library in your own project.
// main.exe
# include "other/common_math.h"
# include "graphics.h"
int main() {
draw(...);
}
Boom! You get a linker error and you have no idea why it's failing. The reason is that the common library uses different versions of the same include common_math.h (I have made it obvious here in the example by including a different path, but it might not always be so obvious. Maybe the include path is different in the compiler settings).
Note in this example, the linker would tell you it couldn't find draw(), when in reality you know it obviously is being exported by the library. You could spend hours scratching your head wondering what went wrong. The thing is, the linker sees a different signature because the parameter types are slightly different. In the example, vec3 is a different type in both projects as far as the compiler is concerned. This could happen because they come from two slightly different include files (maybe the include files come from two different versions of the library).
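As a purely hypothetical illustration (the real headers aren't shown in this answer), the two copies of common_math.h only need to spell vec3 differently for draw's mangled signature to change:

// common_math.h as seen when building graphics.lib (hypothetical contents)
struct vec3 { float x, y, z; };   // draw(vec3) mangles against the class type

// other/common_math.h as seen when building main.exe (hypothetical contents)
typedef float vec3[3];            // the array parameter decays, so draw(vec3) is effectively draw(float*)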
Debugging the linker
DUMPBIN is your friend, if you are using Visual Studio. I'm sure other compilers have other similar tools.
The process goes like this:
1. Note the weird mangled name given in the linker error (e.g. draw@graphics@XYZ).
2. Dump the exported symbols from the library into a text file (see the example command after this list).
3. Search for the exported symbol of interest, and notice that the mangled name is different.
4. Pay attention to why the mangled names ended up different. You would be able to see that the parameter types are different, even though they look the same in the source code.
5. Reason about why they are different. In the example given above, they are different because of different include files.
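For step 2, for instance (library names and output files are illustrative), you could dump and search the symbols of a static library like this; for a DLL you would use dumpbin /EXPORTS instead of /SYMBOLS:
> dumpbin /SYMBOLS graphics.lib > symbols.txt
> findstr draw symbols.txt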
[1] By project I mean a set of source files that are linked together to produce either a library or an executable.
EDIT 1: Rewrote first section to be easier to understand. Please comment below to let me know if something else needs to be fixed. Thanks!
Befriending templates...

Given a code snippet of a template type with a friend operator (or function) declared inside it, the operator<< is being declared as a non-template function. For every type T used with Foo, there needs to be a non-templated operator<<. For example, if there is a type Foo<int> declared, then there must be an operator implementation as follows;
std::ostream& operator<< (std::ostream& os, const Foo<int>& a) {/*...*/}
Since it is not implemented, the linker fails to find it and results in the error.
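The offending pattern isn't reproduced in this answer; a minimal sketch of it (assuming <ostream> is included, as in the listings below) looks like this:

template <typename T>
class Foo {
    // operator<< is declared here as a NON-template friend; every Foo<T> therefore
    // expects its own non-template definition, which is never provided anywhere.
    friend std::ostream& operator<<(std::ostream& os, const Foo& a);
    // ...
};

GCC even points at this situation with a warning: "friend declaration 'std::ostream& operator<<(...)' declares a non-template function [-Wnon-template-friend]" and the note "(if this is not what you intended, make sure the function template has already been declared and add <> after the function name here)".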
To correct this, you can declare a template operator before the Foo type and then declare the appropriate instantiation as a friend. The syntax is a little awkward, but it looks as follows;
// forward declare the Foo
template <typename>
class Foo;
// forward declare the operator <<
template <typename T>
std::ostream& operator<<(std::ostream&, const Foo<T>&);
template <typename T>
class Foo {
friend std::ostream& operator<< <>(std::ostream& os, const Foo<T>& a);
// note the required <> ^^^^
// ...
};
template <typename T>
std::ostream& operator<<(std::ostream&, const Foo<T>&)
{
// ... implement the operator
}
The above code limits the friendship of the operator to the corresponding instantiation of Foo, i.e. the operator<< <int> instantiation is limited to accessing the private members of the instantiation of Foo<int>.
Alternatives include:
Allowing the friendship to extend to all instantiations of the template, as follows (the original listing isn't reproduced above; the standard form is sketched below);
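template <typename T>
class Foo {
    // Any instantiation of the operator template is a friend of any Foo<T>
    // (assuming, as in the listings above, that <ostream> is included).
    template <typename U>
    friend std::ostream& operator<<(std::ostream&, const Foo<U>&);
    // ...
};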
Or, the implementation for the operator<< can be done inline inside the class definition;
template <typename T>
class Foo {
friend std::ostream& operator<<(std::ostream& os, const Foo& a)
{ /*...*/ }
// ...
};
Note: when the declaration of the operator (or function) only appears in the class, the name is not available for "normal" lookup, only for argument-dependent lookup; from cppreference:
A name first declared in a friend declaration within class or class template X becomes a member of the innermost enclosing namespace of X, but is not accessible for lookup (except argument-dependent lookup that considers X) unless a matching declaration at the namespace scope is provided...
There is further reading on template friends at cppreference and the C++ FAQ.
Your linkage consumes libraries before the object files that refer to them

(The examples below are in C; they could equally well be C++. This assumes that where libfoo depends on libbar your linkage correctly puts libfoo before libbar, and that the undefined symbols really are declared in headers you #include and defined in the libraries you link.)

A minimal example involving a static library you built yourself: you build the static library libmy_lib.a, compile your program eg1.c to eg1.o, and then try to link the two together and fail:
$ gcc -o eg1 -L. -lmy_lib eg1.o
eg1.o: In function `main':
eg1.c:(.text+0x5): undefined reference to `hw'
collect2: error: ld returned 1 exit status
The same result if you compile and link in one step, like:
$ gcc -o eg1 -I. -L. -lmy_lib eg1.c
/tmp/ccQk1tvs.o: In function `main':
eg1.c:(.text+0x5): undefined reference to `hw'
collect2: error: ld returned 1 exit status
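For reference, example 1's source files aren't reproduced here; a minimal reconstruction consistent with the rest of the answer (the library defines a function hw that prints "Hello World", and eg1.c calls it) is:

/* my_lib.h (sketch) */
#ifndef MY_LIB_H
#define MY_LIB_H
void hw(void);
#endif

/* my_lib.c (sketch) */
#include <stdio.h>
#include "my_lib.h"
void hw(void) { puts("Hello World"); }

/* eg1.c (sketch) */
#include "my_lib.h"
int main(void) { hw(); return 0; }

The library and the object file would be built with commands along these lines:
$ gcc -c -o my_lib.o my_lib.c
$ ar rcs libmy_lib.a my_lib.o
$ gcc -I. -c -o eg1.o eg1.c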
A minimal example involving a shared system library, the compression library libz
eg2.c
#include <zlib.h>
#include <stdio.h>
int main()
{
printf("%s\n",zlibVersion());
return 0;
}
Compile your program:
$ gcc -c -o eg2.o eg2.c
Try to link your program with libz and fail:
$ gcc -o eg2 -lz eg2.o
eg2.o: In function `main':
eg2.c:(.text+0x5): undefined reference to `zlibVersion'
collect2: error: ld returned 1 exit status
Same if you compile and link in one go:
$ gcc -o eg2 -I. -lz eg2.c
/tmp/ccxCiGn7.o: In function `main':
eg2.c:(.text+0x5): undefined reference to `zlibVersion'
collect2: error: ld returned 1 exit status
And a variation on example 2 involving pkg-config:
$ gcc -o eg2 $(pkg-config --libs zlib) eg2.o
eg2.o: In function `main':
eg2.c:(.text+0x5): undefined reference to `zlibVersion'
What are you doing wrong?
In the sequence of object files and libraries you want to link to make your program, you are placing the libraries before the object files that refer to them. You need to place the libraries after the object files that refer to them.
Link example 1 correctly:
$ gcc -o eg1 eg1.o -L. -lmy_lib
Success:
$ ./eg1
Hello World
Link example 2 correctly:
$ gcc -o eg2 eg2.o -lz
Success:
$ ./eg2
1.2.8
Link the example 2 pkg-config variation correctly (the command isn't shown in the original; it is simply the corrected ordering, with the pkg-config expansion placed after the object file):
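$ gcc -o eg2 eg2.o $(pkg-config --libs zlib)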
The explanation

By default, a linkage command generated by GCC, on your distro, consumes the files in the linkage from left to right in command-line sequence. When it finds that a file refers to something and does not contain a definition for it, it will search for a definition in files further to the right. If it eventually finds a definition, the reference is resolved. If any references remain unresolved at the end, the linkage fails: the linker does not search backwards.
First, example 1, with the static library libmy_lib.a
A static library is an indexed archive of object files. When the linker finds -lmy_lib in the linkage sequence and figures out that this refers to the static library ./libmy_lib.a, it wants to know whether your program needs any of the object files in libmy_lib.a.
There is only one object file in libmy_lib.a, namely my_lib.o, and there's only one thing defined in my_lib.o, namely the function hw.
The linker will decide that your program needs my_lib.o if and only if it already knows that your program refers to hw, in one or more of the object files it has already added to the program, and that none of the object files it has already added contains a definition for hw.
If that is true, then the linker will extract a copy of my_lib.o from the library and add it to your program. Then, your program contains a definition for hw, so its references to hw are resolved.
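As a hedged illustration (exact offsets and output depend on your build), you can see this structure with ar and nm:
$ ar t libmy_lib.a          # list the members of the archive
my_lib.o
$ nm libmy_lib.a            # list the symbols each member defines (T) or references (U)
my_lib.o:
0000000000000000 T hw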
When you try to link the program like:
$ gcc -o eg1 -L. -lmy_lib eg1.o
the linker has not added eg1.o to the program when it sees -lmy_lib, because at that point it has not seen eg1.o. Your program does not yet make any references to hw: it does not yet make any references at all, because all the references it makes are in eg1.o.
So the linker does not add my_lib.o to the program and has no further use for libmy_lib.a.
Next, it finds eg1.o, and adds it to the program. An object file in the linkage sequence is always added to the program. Now, the program makes a reference to hw, and does not contain a definition of hw; but there is nothing left in the linkage sequence that could provide the missing definition. The reference to hw ends up unresolved, and the linkage fails.
Second, example 2, with the shared library libz
A shared library isn't an archive of object files or anything like it. It's much more like a program that doesn't have a main function and instead exposes multiple other symbols that it defines, so that other programs can use them at runtime.
Many Linux distros today configure their GCC toolchain so that its language drivers (gcc,g++,gfortran etc) instruct the system linker (ld) to link shared libraries on an as-needed basis. You have got one of those distros.
This means that when the linker finds -lz in the linkage sequence, and figures out that this refers to the shared library (say) /usr/lib/x86_64-linux-gnu/libz.so, it wants to know whether any references that it has added to your program that aren't yet defined have definitions that are exported by libz.
If that is true, then the linker will not copy any chunks out of libz and add them to your program; instead, it will just doctor the code of your program so that:-
At runtime, the system program loader will load a copy of libz into the same process as your program whenever it loads a copy of your program, to run it.
At runtime, whenever your program refers to something that is defined in libz, that reference uses the definition exported by the copy of libz in the same process.
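As a hedged illustration (library paths and load addresses differ per system), a successful link of example 2 records libz as a runtime dependency, which you can inspect with ldd:
$ ldd eg2
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f...)
        ...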
Your program wants to refer to just one thing that has a definition exported by libz, namely the function zlibVersion, which is referred to just once, in eg2.c. If the linker adds that reference to your program, and then finds the definition exported by libz, the reference is resolved.
But when you try to link the program like:
gcc -o eg2 -lz eg2.o
the order of events is wrong in just the same way as with example 1. At the point when the linker finds -lz, there are no references to anything in the program: they are all in eg2.o, which has not yet been seen. So the linker decides it has no use for libz. When it then reaches eg2.o and adds it to the program, the program has an undefined reference to zlibVersion, but the linkage sequence is finished; that reference is unresolved, and the linkage fails.
Lastly, the pkg-config variation of example 2 has a now obvious explanation. After shell-expansion:
gcc -o eg2 $(pkg-config --libs zlib) eg2.o
becomes:
gcc -o eg2 -lz eg2.o
which is just example 2 again.
I can reproduce the problem in example 1, but not in example 2
The linkage:
gcc -o eg2 -lz eg2.o
works just fine for you!
(Or: That linkage worked fine for you on, say, Fedora 23, but fails on Ubuntu 16.04)
That's because the distro on which the linkage works is one of the ones that does not configure its GCC toolchain to link shared libraries as-needed.
Back in the day, it was normal for unix-like systems to link static and shared libraries by different rules. Static libraries in a linkage sequence were linked on the as-needed basis explained in example 1, but shared libraries were linked unconditionally.
This behaviour is economical at linktime because the linker doesn't have to ponder whether a shared library is needed by the program: if it's a shared library, link it. And most libraries in most linkages are shared libraries. But there are disadvantages too:-
It is uneconomical at runtime, because it can cause shared libraries to be loaded along with a program even if it doesn't need them.
The different linkage rules for static and shared libraries can be confusing to inexpert programmers, who may not know whether -lfoo in their linkage is going to resolve to /some/where/libfoo.a or to /some/where/libfoo.so, and might not understand the difference between shared and static libraries anyway.
This trade-off has led to the schismatic situation today. Some distros have changed their GCC linkage rules for shared libraries so that the as-needed principle applies for all libraries. Some distros have stuck with the old way.
Why do I still get this problem even if I compile-and-link at the same time?
If I just do:
$ gcc -o eg1 -I. -L. -lmy_lib eg1.c
surely gcc has to compile eg1.c first, and then link the resulting object file with libmy_lib.a. So how can it not know that object file is needed when it's doing the linking?
Because compiling and linking with a single command does not change the order of the linkage sequence.
When you run the command above, gcc figures out that you want compilation + linkage. So behind the scenes it generates a compilation command and runs it, then generates a linkage command and runs it, as if you had run two commands roughly like the following (a sketch; the temporary object-file name is chosen by gcc):
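$ gcc -I. -c -o /tmp/ccQk1tvs.o eg1.c          # temporary object file, name chosen by gcc
$ gcc -o eg1 -L. -lmy_lib /tmp/ccQk1tvs.o      # same wrong ordering as before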
So the linkage fails just as it does if you do run those two commands. The only difference you notice in the failure is that gcc has generated a temporary object file in the compile + link case, because you're not telling it to use eg1.o; so the error output names the temporary object file (as in the one-step failures shown earlier) instead of eg1.o.
See also: the order in which interdependent linked libraries are specified is wrong. Putting interdependent libraries in the wrong order is just one way in which you can get files that need definitions of things coming later in the linkage than the files that provide the definitions. Putting libraries before the object files that refer to them is another way of making the same mistake.
GNU ld wrapper that does not support linker scripts

Some .so files are actually GNU ld linker scripts; for example, the libtbb.so file is an ASCII text file with contents like the following (the exact script isn't reproduced in the original; a representative form is shown below):
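/* linker script: simply redirects the linker to the real shared object */
INPUT (libtbb.so.2)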
Some more elaborate builds may not support this. For example, if you include -v in the compiler options, you can see that the mainwin gcc wrapper mwdip discards linker-script command files in its verbose output list of libraries to link. A simple workaround is to replace the linker-script input command file with a copy of (or a symlink to) the actual file, for example (illustrative commands, using the file names from this answer):
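$ cp libtbb.so.2 libtbb.so        # or: ln -sf libtbb.so.2 libtbb.so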
Alternatively, you can replace the -l argument with the full path of the .so, e.g. use /home/foo/tbb-4.3/linux/lib/intel64/gcc4.4/libtbb.so.2 instead of -ltbb.
Because people seem to be directed to this question when it comes to linker errors, I am going to add this here.

One possible reason for linker errors with GCC 5.2.0 is that the new libstdc++ library ABI is now chosen by default.

If you get linker errors about undefined references to symbols that involve types in the std::__cxx11 namespace or the tag [abi:cxx11], it probably indicates that you are trying to link together object files that were compiled with different values of the _GLIBCXX_USE_CXX11_ABI macro. This commonly happens when linking to a third-party library that was compiled with an older version of GCC. If the third-party library cannot be rebuilt with the new ABI, then you will need to recompile your code with the old ABI.

So, if you suddenly get linker errors when switching to a GCC newer than 5.1.0, this is something to check.
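For instance (the file name is illustrative), the macro can be set on the compiler command line when rebuilding your own code against the old ABI:
$ g++ -D_GLIBCXX_USE_CXX11_ABI=0 -c myfile.cpp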
A linked .lib file is associated with a .dll

I had the same problem. Say I have the projects MyProject and TestProject. I had effectively linked the lib file for MyProject into TestProject. However, that lib file had been produced as the DLL for MyProject was built, and I did not include the source for all the methods in MyProject, but only access to the DLL's entry points.

To solve the problem, I built MyProject as a LIB and linked TestProject against this .lib file (I copy-pasted the generated .lib file into the TestProject folder). I could then build MyProject again as a DLL. Since the lib that TestProject is linked against does contain the code for all the methods in the classes of MyProject, it compiled.
A bug in the compiler/IDE

I recently had this problem, and it turned out to be a bug in Visual Studio Express 2013. I had to remove a source file from the project and re-add it to get past the bug.

If you believe this could be a bug in your compiler/IDE, try the same kind of steps: remove the affected source file from the project and add it back, then do a clean rebuild.
Use the linker to help diagnose the error

Most modern linkers include a verbose option that prints out, to varying degrees, what they are doing during the link.

For gcc and clang, you would typically add -v -Wl,--verbose or -v -Wl,-v to the command line.

For MSVC, /VERBOSE (in particular /VERBOSE:LIB) is added to the link command line; see the documentation of the /VERBOSE linker option for more details.
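For example (target and library names are illustrative), a verbose GCC link might look like:
$ g++ -v -Wl,--verbose -o app main.o -lmylib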
linker option.ykejflvf15#
Suppose you have a big project written in C++ with a thousand .cpp files and a thousand .h files, and say the project also depends on ten static libraries. Say we are on Windows and we build our project in Visual Studio 20xx. When you press Ctrl+F7, Visual Studio starts compiling the whole solution (suppose we have just one project in the solution).

What does compiling mean?

(The first step turns each .cpp file into an object file.) The second step is done by the Linker, which should merge all the object files and finally produce the output (which may be an executable or a library).

Steps in linking the project

If a symbol is used but never defined in any of the object files or libraries handed to the linker, you will see an error such as:
error LNK2001: unresolved external symbol "void __cdecl foo(void)" (?foo@@YAXXZ)
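A hedged sketch (not from the original answer) of the smallest way to provoke the error above: foo() is declared and called, but no object file or library handed to the linker ever defines it.

void foo();              // declaration only, no definition anywhere

int main()
{
    foo();               // link fails: unresolved external symbol ?foo@@YAXXZ
}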
Observations

1. Once the linker finds a symbol, it does not search other libraries for it.
2. The order in which libraries are linked does matter.
3. If the linker finds an external symbol in a static library, it includes the symbol in the project's output. However, if the library is shared (dynamic), it does not include the code (the symbol) in the output, but a run-time crash may occur.

How to resolve this kind of error

Compiler-time errors:

Linker-time errors:

#pragma once allows the compiler not to include a header if it has already been included in the .cpp currently being compiled.